I am trying to make a word cloud from a list of phrases, many of which are repeated, instead of from individual words. My data looks something like this, with one column of names.
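For example (hypothetical values, in the shape the answers below assume):

df <- data.frame(names = c("John", "John", "John A", "Mary A", "Mary A", "Paul H C"))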
Your difficulty is that each element of df$names is being treated as a "document" by the functions of tm. For example, the document "John A" contains the words "John" and "A". It sounds like you want to keep the names as they are and simply count their occurrences; you can use table for that.
library(wordcloud)
df <- data.frame(theNames = c("John", "John", "Joseph A", "Mary A", "Mary A", "Paul H C", "Paul H C"))
# count how many times each name occurs
tb <- table(df$theNames)
# plot each name, sized by its count
wordcloud(names(tb), as.numeric(tb), scale = c(8, .3), min.freq = 1, max.words = 100,
          random.order = TRUE, rot.per = .15, colors = "black", vfont = c("sans serif", "plain"))
Install RWeka and its dependencies, then try this:
library(tm)
library(RWeka)

# tokenizer that splits text into two-word phrases
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 2))
# ... other tokenizers
tok <- BigramTokenizer

# build a corpus from the column of names, then tokenize it into bigrams
df.corpus <- VCorpus(VectorSource(df$theNames))
tdmgram <- TermDocumentMatrix(df.corpus, control = list(tokenize = tok))
# ... create the wordcloud (see below)
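To fill in that last step, one way (a sketch, reusing tdmgram and the wordcloud package loaded earlier) is to sum each phrase's counts across documents and pass those frequencies to wordcloud:

# total frequency of each two-word phrase across all documents
freq <- sort(rowSums(as.matrix(tdmgram)), decreasing = TRUE)
# draw the cloud, keeping even phrases that occur only once
wordcloud(names(freq), freq, min.freq = 1, random.order = FALSE)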
The tokenizer line above chops your text into phrases of length 2; more specifically, it creates phrases with a minimum length of 2 and a maximum length of 2. Using Weka's general NGramTokenizer algorithm, you can create different tokenizers (e.g. minimum length 1, maximum length 2), and you'll probably want to experiment with different lengths. You can also call them tok1, tok2, and so on instead of the verbose "BigramTokenizer" I've used above, as in the sketch below.
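For instance, a couple of alternative tokenizers along those lines might look like this (the names tok1 and tok2 and the chosen lengths are just illustrative):

# phrases of one or two words
tok1 <- function(x) NGramTokenizer(x, Weka_control(min = 1, max = 2))
# phrases of exactly three words
tok2 <- function(x) NGramTokenizer(x, Weka_control(min = 3, max = 3))
tdmgram <- TermDocumentMatrix(df.corpus, control = list(tokenize = tok1))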