Extracting Key-Phrases from text based on the Topic with Python


I have a large dataset with 3 columns: text, phrase and topic. I want to find a way to extract key-phrases (the phrase column) based on the topic. Key-Phrase can b

3 Answers
  • 2021-01-03 04:13

    I think what you're looking for is called "topic modeling" in NLP. You should try using LDA for topic modeling; it's one of the easiest methods to apply. Also, as @Mike mentioned, there are many approaches to converting words to vectors. You should first try simple approaches like a count vectorizer and then gradually move to something like word2vec or GloVe.

    I am attaching some links for applying LDA to a corpus:

    1. https://towardsdatascience.com/nlp-extracting-the-main-topics-from-your-dataset-using-lda-in-minutes-21486f5aa925
    2. https://www.machinelearningplus.com/nlp/topic-modeling-visualization-how-to-present-results-lda-models/
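
    For a quick start, here's a minimal sketch of that count-vectorizer + LDA pipeline using scikit-learn (the sample sentences, number of topics and parameters are placeholders, not taken from the question's data):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Placeholder documents standing in for the text column
    docs = [
        "great game with a lot of amazing goals",
        "four grand slam championships",
        "amazing slam dunk by the best player",
    ]

    # Bag-of-words counts are the input LDA expects
    vec = CountVectorizer(stop_words='english')
    X = vec.fit_transform(docs)

    # Fit a small LDA model and print the top words per topic
    lda_model = LatentDirichletAllocation(n_components=2, random_state=0)
    lda_model.fit(X)
    vocab = vec.get_feature_names_out()  # use get_feature_names() on older scikit-learn
    for i, weights in enumerate(lda_model.components_):
        top = [vocab[j] for j in weights.argsort()[::-1][:3]]
        print('Topic {}: {}'.format(i, ' '.join(top)))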

  • 2021-01-03 04:17

    It looks like a good approach here would be to use a Latent Dirichlet allocation model, which is an example of what are known as topic models.


    LDA is an unsupervised model that finds similar groups among a set of observations, which you can then use to assign a topic to each of them. Here I'll go through a possible approach to solve this by training a model using the sentences in the text column. If the phrases are representative enough and contain the information the model needs to capture, they could also be a good (possibly better) candidate for training the model, though you're the better judge of that.

    Before you train the model, you need to apply some preprocessing steps, including tokenizing the sentences, removing stopwords and lemmatizing (or stemming). For that you can use nltk:

    from nltk.stem import WordNetLemmatizer
    from nltk.corpus import stopwords
    from nltk.tokenize import word_tokenize
    import lda
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer

    # Requires the nltk 'punkt', 'stopwords' and 'wordnet' resources
    # (download them with nltk.download() if needed)
    ignore = set(stopwords.words('english'))
    lemmatizer = WordNetLemmatizer()
    text = []
    for sentence in df.text:
        words = word_tokenize(sentence)
        # Keep the non-stopwords, reduced to their lemma
        lemmatized = [lemmatizer.lemmatize(word) for word in words if word not in ignore]
        text.append(' '.join(lemmatized))
    

    Now we have a more appropriate corpus to train the model:

    print(text)
    
    ['great game lot amazing goal team',
     'goalkeeper team made misteke',
     'four grand slam championchips',
     'best player three-point line',
     'Novak Djokovic best player time',
     'amazing slam dunk best player',
     'deserved yellow-card foul',
     'free throw point']
    

    We can then convert the text to a matrix of token counts through CountVectorizer, which is the input that LDA expects:

    vec = CountVectorizer(analyzer='word', ngram_range=(1,1))
    X = vec.fit_transform(text)
    

    Note that you can use the ngram_range parameter to specify the n-gram range you want to use to train the model. By setting ngram_range=(1,2), for instance, you'd end up with features containing all individual words as well as the 2-grams in each sentence. Here's an example of the features after training CountVectorizer with ngram_range=(1,2):
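
    For reference, a refit along these lines (a minimal sketch reusing the preprocessed text list from above) is what would produce those features:

    # Same as before, but also extracting bigrams as features
    vec = CountVectorizer(analyzer='word', ngram_range=(1, 2))
    X = vec.fit_transform(text)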

    vec.get_feature_names()
    ['amazing',
     'amazing goal',
     'amazing slam',
     'best',
     'best player',
     ....
    

    The advantage of using n-grams is that you could then also find Key-Phrases other than just single words.

    Then we can train the LDA with whatever number of topics you want; in this case I'll just select 3 topics (note that this has nothing to do with the topics column), which you can consider to be the Key-Phrases, or words in this case, that you mention. Here I'll be using lda, though there are several options such as gensim. Each topic will have an associated set of words from the vocabulary it has been trained on, with each word having a score measuring its relevance in the topic.

    model = lda.LDA(n_topics=3, random_state=1)
    model.fit(X)
    

    Through topic_word_ we can now obtain the scores associated with each topic. We can use argsort to sort the vector of scores, and use it to index the vector of feature names, which we can obtain with vec.get_feature_names:

    topic_word = model.topic_word_
    
    vocab = vec.get_feature_names()
    n_top_words = 3
    
    for i, topic_dist in enumerate(topic_word):
        topic_words = np.array(vocab)[np.argsort(topic_dist)][:-(n_top_words+1):-1]
        print('Topic {}: {}'.format(i, ' '.join(topic_words)))
    
    Topic 0: best player point
    Topic 1: amazing team slam
    Topic 2: yellow novak card
    

    The printed results don't really represent much in this case, since the model has been trained with the small sample from the question; you should see clearer and more meaningful topics when training with your entire corpus.

    Also note that for this example I've used the whole vocabulary to train the model. However, it seems that in your case what would make more sense is to split the text column into groups according to the topics you already have, and train a separate model on each group (see the sketch below). But hopefully this gives you a good idea of how to proceed.
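
    As a minimal sketch of that per-topic setup, reusing the imports, the ignore set and the lemmatizer from the preprocessing step above, and assuming df has the text and topic columns mentioned in the question (the other names here are illustrative):

    def preprocess(sentence):
        # Tokenize, drop stopwords and lemmatize, as in the preprocessing step above
        words = word_tokenize(sentence)
        return ' '.join(lemmatizer.lemmatize(w) for w in words if w not in ignore)

    # One vectorizer and one LDA model per topic group
    models = {}
    for topic, group in df.groupby('topic'):
        corpus = [preprocess(sentence) for sentence in group.text]
        vec_topic = CountVectorizer(analyzer='word', ngram_range=(1, 2))
        X_topic = vec_topic.fit_transform(corpus)
        model_topic = lda.LDA(n_topics=3, random_state=1)
        model_topic.fit(X_topic)
        models[topic] = (vec_topic, model_topic)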

  • 2021-01-03 04:39

    It appears you're looking to group short pieces of text by topic. You will have to tokenize the data one way or another. There are a variety of encodings that you could consider (a short scikit-learn sketch of the first two follows this list):

    Bag of words, which classifies by counting the frequency of each word in your vocabulary.

    TF-IDF: Does what's above but makes words that appear in more entries less important

    n-grams / bigrams / trigrams, which essentially do the bag-of-words method but also maintain some context around each word. So you'll have encodings for each word, but you'll also have tokens for "great_game", "game_with" and "great_game_with", etc.

    Orthogonal Sparse Bigrams (OSBs), which also create features from words that are further apart, like "great__with".
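
    As a minimal sketch, here's how the first two encodings look with scikit-learn (the sample sentences are placeholders):

    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

    docs = ["great game with amazing goals", "amazing slam dunk by the best player"]

    # Bag of words: raw token counts per document
    bow = CountVectorizer().fit_transform(docs)

    # TF-IDF: counts reweighted so words common across documents matter less
    tfidf = TfidfVectorizer().fit_transform(docs)

    # Bigrams/trigrams are one parameter away, e.g. ngram_range=(1, 3)
    ngrams = CountVectorizer(ngram_range=(1, 3)).fit_transform(docs)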

    Any of these options could be ideal for your dataset (the last two are likely your best bet). If none of them work, there are a few more options you could try:


    First, you could use word embeddings. These are vector representations of each word that, unlike one-hot encodings, intrinsically carry word meaning. You can sum the word vectors in a sentence to get a new vector capturing the general idea of what the sentence is about, which can then be decoded.
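
    A minimal sketch of that idea with gensim's pre-trained GloVe vectors (the model name and the plain summing strategy are just one possible choice):

    import numpy as np
    import gensim.downloader as api

    # Small pre-trained GloVe model (downloads ~66 MB on first use)
    wv = api.load('glove-wiki-gigaword-50')

    def sentence_vector(sentence):
        # Sum the vectors of the words the model knows about
        vectors = [wv[w] for w in sentence.lower().split() if w in wv]
        return np.sum(vectors, axis=0) if vectors else np.zeros(wv.vector_size)

    print(sentence_vector("great game with a lot of amazing goals").shape)  # (50,)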

    You can also use word embeddings alongside a bidirectional LSTM. This is the most computationally intensive option, but if your other options are not working it might be a good choice. BiLSTMs try to interpret sentences by looking at the context around each word to understand what it might mean in that context.
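
    A minimal sketch of such a classifier in Keras (the sizes are placeholders, and the sentences are assumed to already be padded sequences of token ids):

    import tensorflow as tf

    vocab_size, max_len, n_topics = 10000, 50, 3  # placeholder sizes

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(max_len,)),
        # Learn an embedding for each token id
        tf.keras.layers.Embedding(vocab_size, 128),
        # Read the sequence in both directions for context
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(n_topics, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    model.summary()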

    Hope this helps
