Question
I am trying to make a word cloud from a list of phrases, many of which are repeated, instead of from individual words. My data looks something like this, with one column of my data frame being a list of phrases.
df$names <- c("John", "John", "Joseph A", "Mary A", "Mary A", "Paul H C", "Paul H C")
I would like to make a word cloud where all of these names are treated as individual phrases whose frequency is displayed, not the words which make them up. The code I have been using looks like:
df.corpus <- Corpus(DataframeSource(data.frame(df$names)))
df.corpus <- tm_map(df.corpus, function(x) removeWords(x, stopwords("english")))
#turning that corpus into a TDM
tdm <- TermDocumentMatrix(df.corpus)
m <- as.matrix(tdm)
v <- sort(rowSums(m),decreasing=TRUE)
d <- data.frame(word = names(v),freq=v)
pal <- brewer.pal(9, "BuGn")
pal <- pal[-(1:2)]
#making a wordcloud
png("wordcloud.png", width=1280,height=800)
wordcloud(d$word,d$freq, scale=c(8,.3),min.freq=2,max.words=100, random.order=T, rot.per=.15, colors="black", vfont=c("sans serif","plain"))
dev.off()
This creates a word cloud, but it is of each component word, not of the phrases. So, I see the relative frequency of "A", "H", "John", etc. instead of the relative frequency of "Joseph A", "Mary A", etc., which is what I want.
I'm sure this isn't that complicated to fix, but I can't figure it out! I would appreciate any help.
Answer 1:
Your difficulty is that each element of df$names is being treated as a "document" by the functions of tm. For example, the document "Joseph A" contains the words "Joseph" and "A". It sounds like you want to keep the names as they are and just count up their occurrences; you can simply use table for that.
library(wordcloud)
df<-data.frame(theNames=c("John", "John", "Joseph A", "Mary A", "Mary A", "Paul H C", "Paul H C"))
tb<-table(df$theNames)
wordcloud(names(tb),as.numeric(tb), scale=c(8,.3),min.freq=1,max.words=100, random.order=T, rot.per=.15, colors="black", vfont=c("sans serif","plain"))
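To see what table produces here, this base-R-only sketch builds the same word/freq data frame the asker constructed by hand (the printed counts follow from the seven example values; d is just an illustrative name):

```r
# Count each full name as a single phrase with table()
df <- data.frame(theNames = c("John", "John", "Joseph A", "Mary A",
                              "Mary A", "Paul H C", "Paul H C"))
tb <- table(df$theNames)

# Reshape into the word/freq form used in the question
d <- data.frame(word = names(tb), freq = as.numeric(tb))
d
#       word freq
# 1     John    2
# 2 Joseph A    1
# 3   Mary A    2
# 4 Paul H C    2
```

Each multi-word name survives as one row, which is exactly what the tm pipeline was splitting apart.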
Answer 2:
Install RWeka and its dependencies, then try this:
library(RWeka)
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 2))
# ... other tokenizers
tok <- BigramTokenizer
tdmgram <- TermDocumentMatrix(df.corpus, control = list(tokenize = tok))
#... create wordcloud
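The elided wordcloud step can be sketched by reusing the matrix/rowSums approach from the question (an untested sketch: it assumes the tm and wordcloud packages are loaded and that tdmgram exists as built above):

```r
# Sum bigram frequencies across documents and plot the phrases
m <- as.matrix(tdmgram)
v <- sort(rowSums(m), decreasing = TRUE)
d <- data.frame(word = names(v), freq = v)
wordcloud(d$word, d$freq, scale = c(8, .3), min.freq = 1,
          max.words = 100, random.order = TRUE, rot.per = .15,
          colors = "black", vfont = c("sans serif", "plain"))
```

The rows of the term-document matrix are now two-word phrases rather than single words, so the cloud displays phrase frequencies.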
The tokenizer line above chops your text into phrases of exactly two words: it creates phrases of min length 2 and max length 2. Using Weka's general NGramTokenizer algorithm, you can create different tokenizers (e.g. min length 1, max length 2), and you'll probably want to experiment with different lengths. You can also name them tok1, tok2, and so on instead of the verbose "BigramTokenizer" I've used above.
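For instance, the shorter-named variants mentioned above might look like this (hypothetical names tok12 and tok3; requires RWeka and its Java dependency):

```r
library(RWeka)

# Unigrams and bigrams together (min length 1, max length 2)
tok12 <- function(x) NGramTokenizer(x, Weka_control(min = 1, max = 2))

# Trigrams only (min length 3, max length 3)
tok3 <- function(x) NGramTokenizer(x, Weka_control(min = 3, max = 3))

# Swap either into the TermDocumentMatrix call from above
tdm12 <- TermDocumentMatrix(df.corpus, control = list(tokenize = tok12))
```

Mixing lengths (e.g. tok12) keeps single names like "John" while still capturing two-word phrases.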
Source: https://stackoverflow.com/questions/26937960/creating-word-cloud-of-phrases-not-individual-words-in-r