An NLP Model that Suggests a List of Words for an Incomplete Sentence


Question


I have read a number of papers that talk about predicting missing words in a sentence. What I really want is to create a model that suggests words for an incomplete sentence.

  Example:

  Incomplete Sentence :
  I bought an ___________  because it's rainy.

  Suggested Words:
      umbrella
      soup
      jacket

In the papers I have read, the authors used the Microsoft Sentence Completion Dataset to predict a missing word in a sentence.

  Example :

  Incomplete Sentence :

  I'm sad because you are __________

  Missing Word Options:
  a) crying
  b) happy
  c) pretty
  d) sad
  e) bad

I don't want to predict a missing word from a list of options; I want the model itself to suggest a list of words for an incomplete sentence. Is that feasible? Please enlighten me, because I'm really confused. What state-of-the-art model can I use to suggest a list of semantically coherent words for an incomplete sentence?

Is it necessary for the suggested output words to be included in the training dataset?


Answer 1:


This is exactly how the BERT model was trained: mask some random words in the sentence, and make the network predict them. So yes, it is feasible. And no, it is not necessary to have the list of suggested words as a training input. However, the suggested words do have to be part of the vocabulary with which this BERT model was trained.

I adapted this answer to show how the completion function may work.

# install this package to obtain the pretrained model
# ! pip install -U pytorch-pretrained-bert

import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()  # turn off dropout for inference

def fill_the_gaps(text):
    # wrap the input with BERT's special tokens
    text = '[CLS] ' + text + ' [SEP]'
    tokenized_text = tokenizer.tokenize(text)
    indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
    # a single sentence, so every token belongs to segment 0
    segments_ids = [0] * len(tokenized_text)
    tokens_tensor = torch.tensor([indexed_tokens])
    segments_tensors = torch.tensor([segments_ids])
    with torch.no_grad():
        predictions = model(tokens_tensor, segments_tensors)
    results = []
    for i, t in enumerate(tokenized_text):
        if t == '[MASK]':
            # take the highest-scoring vocabulary item at each masked position
            predicted_index = torch.argmax(predictions[0, i]).item()
            predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
            results.append(predicted_token)
    return results

print(fill_the_gaps(text = 'I bought an [MASK] because its rainy .'))
print(fill_the_gaps(text = 'Im sad because you are [MASK] .'))
print(fill_the_gaps(text = 'Im worried because you are [MASK] .'))
print(fill_the_gaps(text = 'Im [MASK] because you are [MASK] .'))

The [MASK] symbol indicates a missing word (there can be any number of them). [CLS] and [SEP] are BERT-specific special tokens. The outputs for these particular calls are

['umbrella']
['here']
['worried']
['here', 'here']

The duplication is not surprising: transformer networks are generally good at copying words, and from a semantic point of view these symmetric continuations do indeed look very likely.
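
Since the question asks for a list of suggestions rather than a single word, the argmax in fill_the_gaps can be replaced with a top-k lookup over the prediction scores. A minimal sketch under that assumption (the suggest_words name and the k parameter are mine, not part of the original answer):

def suggest_words(text, k=5):
    # same preprocessing as fill_the_gaps above
    text = '[CLS] ' + text + ' [SEP]'
    tokenized_text = tokenizer.tokenize(text)
    indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
    segments_ids = [0] * len(tokenized_text)
    tokens_tensor = torch.tensor([indexed_tokens])
    segments_tensors = torch.tensor([segments_ids])
    with torch.no_grad():
        predictions = model(tokens_tensor, segments_tensors)
    results = []
    for i, t in enumerate(tokenized_text):
        if t == '[MASK]':
            # keep the k highest-scoring vocabulary items instead of only the argmax
            _, top_indices = torch.topk(predictions[0, i], k)
            results.append(tokenizer.convert_ids_to_tokens(top_indices.tolist()))
    return results

print(suggest_words('I bought an [MASK] because its rainy .', k=5))

This returns, for each mask, the k tokens BERT considers most likely, which matches the "suggested words" format in the question.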

Moreover, if it is not a random word that is missing but exactly the last word (or the last several words), you can use any language model (e.g. another famous SOTA language model, GPT-2) to complete the sentence.
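
As a rough sketch of that last-word case, here is how GPT-2 could be called through the newer transformers package (this snippet is my assumption about the setup, not code from the original answer; the model name and generate call follow the standard transformers API):

# ! pip install -U transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

gpt2_tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
gpt2_model = GPT2LMHeadModel.from_pretrained('gpt2')
gpt2_model.eval()  # inference mode

def complete_sentence(prefix, new_tokens=5):
    input_ids = gpt2_tokenizer.encode(prefix, return_tensors='pt')
    with torch.no_grad():
        # greedy decoding; do_sample=True would give more varied completions
        output_ids = gpt2_model.generate(
            input_ids,
            max_length=input_ids.shape[1] + new_tokens,
            pad_token_id=gpt2_tokenizer.eos_token_id,
        )
    return gpt2_tokenizer.decode(output_ids[0])

print(complete_sentence('I bought an'))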



Source: https://stackoverflow.com/questions/56822991/an-nlp-model-that-suggest-a-list-of-words-in-an-incomplete-sentence
