R and tm package: create a term-document matrix with a dictionary of one or two words?


I ran into the same problem and found that the token-counting functions in the tm package rely on an option called wordLengths, a vector of two numbers giving the minimum and maximum length of tokens to keep track of. By default, tm uses a minimum word length of 3 characters (wordLengths = c(3, Inf)). You can override this option by adding it to the control list in a call to DocumentTermMatrix, like this:

DocumentTermMatrix(my.corpus,
                   control=list(
                       tokenize=newBigramTokenizer,
                       wordLengths = c(1, Inf)))

Your 'jedi' keyword is more than 3 characters long, though, so the default minimum should not have dropped it. Still, you may have changed this option's value earlier while trying to figure out how to count n-grams, so it is worth checking. Also, look at the bounds option, which tells tm to discard words that are less or more frequent than the specified values.
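
For illustration only (the cutoff of 2 documents below is a placeholder, not something taken from the question), bounds goes in the same control list; in tm, bounds = list(global = c(lower, upper)) keeps only terms that appear in at least lower and at most upper documents:

DocumentTermMatrix(my.corpus,
                   control = list(
                       tokenize = newBigramTokenizer,
                       wordLengths = c(1, Inf),
                       # keep terms appearing in >= 2 documents (placeholder cutoff)
                       bounds = list(global = c(2, Inf))))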

I noticed that NGramTokenizer returns character(0) when a one-word string is submitted as input and NGramTokenizer is asked to return unigrams and bigrams.

NGramTokenizer('jedi',  Weka_control(min = 1, max = 2))
# character(0)
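
For comparison, and as a quick sanity check of what I would expect rather than output taken from the original post, a two-word input does return tokens:

NGramTokenizer('jedi master', Weka_control(min = 1, max = 2))
# expected: something like "jedi master" "jedi" "master" (exact order may differ),
# which is why only the one-word document is affected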

I am not sure why this is the output, but I believe this behavior is the reason the keyword jedi was not counted in Doc 3. However, a simple if-then-else fallback appears to work in my situation, both for the sample set and for my actual data set.

library(tm)
library(RWeka)    

my.docs = c('jedi master', 'jedi grandmaster', 'jedi')
my.corpus = Corpus(VectorSource(my.docs))
my.dict = c('jedi master', 'jedi grandmaster', 'jedi')

newBigramTokenizer = function(x) {
  tokenizer1 = NGramTokenizer(x, Weka_control(min = 1, max = 2))
  if (length(tokenizer1) != 0L) {
    return(tokenizer1)          # NGramTokenizer found unigrams/bigrams
  } else {
    return(WordTokenizer(x))    # fall back for one-word inputs
  }
} # WordTokenizer is another tokenizer in the RWeka package.

inspect(DocumentTermMatrix(my.corpus, control=list(tokenize=newBigramTokenizer,
                                                 dictionary=my.dict)))

# <<DocumentTermMatrix (documents: 3, terms: 3)>>
# ...
# Docs jedi jedi grandmaster jedi master
#   1    1                0           1
#   2    1                1           0
#   3    1                0           0
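
To double-check the counts programmatically (standard base R and tm usage, not part of the original answer), the document-term matrix can be converted to an ordinary matrix:

dtm = DocumentTermMatrix(my.corpus,
                         control = list(tokenize = newBigramTokenizer,
                                        dictionary = my.dict))
as.matrix(dtm)            # dense matrix of counts, one row per document
rowSums(as.matrix(dtm))   # total dictionary hits per document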

Please let me know if anyone finds a "gotcha" that I am not considering in the code above. I would also appreciate any insight into why NGramTokenizer returns character(0) in my observation above.
