Question
I have a data column of the following format:
Text
Hello world
Hello
How are you today
I love stackoverflow
blah blah blahdy
I would like to compute the 3-grams for each row in this dataset, perhaps using the tau package's textcnt() function. However, when I tried it, it gave me one numeric vector with the n-grams for the entire column. How can I apply this function to each observation in my data separately?
Answer 1:
Is this what you're after?
library("RWeka")
library("tm")
TrigramTokenizer <- function(x) NGramTokenizer(x,
Weka_control(min = 3, max = 3))
# Build the 'Text' vector used below (Tyler's answer makes it with readLines)
Text <- c("Hello world", "Hello", "How are you today",
          "I love stackoverflow", "blah blah blahdy")
tdm <- TermDocumentMatrix(Corpus(VectorSource(Text)),
                          control = list(tokenize = TrigramTokenizer))
inspect(tdm)
A term-document matrix (4 terms, 5 documents)
Non-/sparse entries: 4/16
Sparsity : 80%
Maximal term length: 20
Weighting : term frequency (tf)
Docs
Terms 1 2 3 4 5
are you today 0 0 1 0 0
blah blah blahdy 0 0 0 0 1
how are you 0 0 1 0 0
i love stackoverflow 0 0 0 1 0
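If you want the trigrams back per document rather than as a matrix, one option (a sketch of my own, not part of the original answer) is to densify the TDM and keep each column's non-zero terms:
# Convert the TDM to a plain matrix; columns are documents
m <- as.matrix(tdm)
# For each document, keep the names of the terms that occur in it
trigrams_per_doc <- apply(m, 2, function(col) names(col)[col > 0])
trigrams_per_doc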
Answer 2:
Here's an n-gram approach using the qdap package:
## Text <- readLines(n=5)
## Hello world
## Hello
## How are you today
## I love stackoverflow
## blah blah blahdy
library(qdap)
ngrams(Text, seq_along(Text), 3)
It's a list and you can access the components with typical list indexing.
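For example (a sketch; the component names depend on your qdap version, so inspect the structure before indexing):
res <- ngrams(Text, seq_along(Text), 3)
str(res, max.level = 1)  # see which components the list contains
res[[1]]                 # pull out the first component by position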
Edit:
As for your first approach, try it like this:
library(tau)
sapply(Text, textcnt, method = "ngram")
## sapply(eta_dedup$title, textcnt, method = "ngram")
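Note that method = "ngram" counts character n-grams; if you want word-level 3-grams per row instead, a small variation (my addition, not part of the original answer) is:
# One named count vector per row of 'Text', word trigrams this time
lapply(Text, textcnt, method = "string", n = 3L)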
Answer 3:
Here's how to do it using the quanteda package:
txt <- c("Hello world", "Hello", "How are you today", "I love stackoverflow", "blah blah blahdy")
require(quanteda)
dfm(txt, ngrams = 3, concatenator = " ", verbose = FALSE)
## Document-feature matrix of: 5 documents, 4 features.
## 5 x 4 sparse Matrix of class "dfmSparse"
## features
## docs how are you are you today i love stackoverflow blah blah blahdy
## text1 0 0 0 0
## text2 0 0 0 0
## text3 1 1 0 0
## text4 0 0 1 0
## text5 0 0 0 1
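Note that quanteda 2.0 removed the ngrams argument to dfm(); under the current API the equivalent is roughly this (a sketch, so check it against your installed version):
# Tokenize first, form the trigrams, then build the dfm
toks <- tokens(txt)
dfm(tokens_ngrams(toks, n = 3, concatenator = " "))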
Answer 4:
I guess the OP wanted to use tau, but the other answers didn't use that package. Here's how you do it in tau:
data = "Hello world\nHello\nHow are you today\nI love stackoverflow\n
blah blah blahdy"
bigram_tau <- textcnt(data, n = 2L, method = "string", recursive = TRUE)
This is returned as a trie, but you can format it as a more classic data-frame type with tokens and sizes:
r <- data.frame(counts = unclass(bigram_tau), size = nchar(names(bigram_tau)))
format(r)
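Because recursive = TRUE also counts the unigrams, you may want only the true bigrams; one way (a sketch, assuming textcnt joins tokens with a single space) is to filter on the token count:
# Keep only rows whose name splits into exactly two tokens
is_bigram <- lengths(strsplit(names(bigram_tau), " ", fixed = TRUE)) == 2
r[is_bigram, ]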
I highly suggest using tau because it performs really well with large data. I have used it to create bigrams from 1 GB of text, and it was both fast and smooth.
Source: https://stackoverflow.com/questions/17556085/compute-ngrams-for-each-row-of-text-data-in-r