Really fast word ngram vectorization in R

野的像风 2020-12-13 15:33

Edit: The new text2vec package is excellent and solves this problem (and many others) really well.

Links: text2vec on CRAN · text2vec on GitHub · vignette that illustrates ngrams
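
For example, a minimal sketch of the text2vec route (using its itoken(), create_vocabulary(), and create_dtm() functions; sents1 is the simulated character vector of sentences defined in the question):

library(text2vec)

# iterate over documents, splitting each into word tokens
it <- itoken(sents1, tokenizer = word_tokenizer, progressbar = FALSE)

# build a vocabulary of unigrams and bigrams
vocab <- create_vocabulary(it, ngram = c(1L, 2L))

# vectorize into a sparse document-term matrix (a Matrix::dgCMatrix)
dtm <- create_dtm(itoken(sents1, tokenizer = word_tokenizer, progressbar = FALSE),
                  vocab_vectorizer(vocab))
dim(dtm)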

2 Answers
  • 2020-12-13 16:24

    This is a really interesting problem, and one that I have spent a lot of time grappling with in the quanteda package. It involves three aspects that I will comment on, although it's only the third that really addresses your question. But the first two points explain why I have only focused on the ngram creation function, since -- as you point out -- that is where the speed improvement can be made.

    1. Tokenization. Here you are using stringi::stri_split_fixed() on the space character, which is the fastest, but not the best, method for tokenizing. We implemented this in almost exactly the same way in quanteda::tokenize(x, what = "fastestword"). It's not the best because stringi can do much smarter implementations of whitespace delimiters. (Even the character class \\s is smarter, but slightly slower -- this is implemented as what = "fasterword".) Your question was not about tokenization, though, so this point is just context.

    2. Tabulating the document-feature matrix. Here we also use the Matrix package, index the documents and features (I call them features, not terms), and create a sparse matrix directly as you do in the code above. But your use of match() is a lot faster than the match/merge methods we were using through data.table. I am going to recode the quanteda::dfm() function, since your method is more elegant and faster. Really, really glad I saw this! (A minimal illustrative sketch of this match()-based construction appears after point 3 below.)

    3. ngram creation. Here I think I can actually help in terms of performance. We implement this in quanteda through an argument to quanteda::tokenize() called ngrams = c(1), where the value can be any set of integers. To get both unigrams and bigrams, for instance, you would use ngrams = 1:2. You can examine the code at https://github.com/kbenoit/quanteda/blob/master/R/tokenize.R, see the internal function ngram(). I've reproduced this below and made a wrapper so that we can compare it directly to your find_ngrams() function.
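
    (Relating to point 2: before the ngram code, here is the minimal illustrative sketch mentioned above of the match()-based construction of a sparse document-feature matrix. This is not quanteda's internal code, and the helper name build_dfm is made up for illustration.)

    library(Matrix)

    # tokens: a list of character vectors, one element per document
    build_dfm <- function(tokens) {
        flat  <- unlist(tokens, use.names = FALSE)
        feats <- unique(flat)                              # feature index
        j     <- match(flat, feats)                        # column for each token occurrence
        i     <- rep(seq_along(tokens), lengths(tokens))   # row (document) for each token occurrence
        # duplicate (i, j) pairs are summed, so repeated features become counts
        sparseMatrix(i = i, j = j, x = 1L,
                     dims = c(length(tokens), length(feats)),
                     dimnames = list(names(tokens), feats))
    }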

    Code for the ngram creation (point 3):

    # wrapper
    find_ngrams2 <- function(x, ngrams = 1, concatenator = " ") { 
        if (sum(1:length(ngrams)) == sum(ngrams)) {
            result <- lapply(x, ngram, n = length(ngrams), concatenator = concatenator, include.all = TRUE)
        } else {
            result <- lapply(x, function(x) {
                xnew <- c()
                for (n in ngrams) 
                    xnew <- c(xnew, ngram(x, n, concatenator = concatenator, include.all = FALSE))
                xnew
            })
        }
        result
    }
    
    # does the work
    ngram <- function(tokens, n = 2, concatenator = "_", include.all = FALSE) {
    
        if (length(tokens) < n) 
            return(NULL)
    
        # start with lower ngrams, or just the specified size if include.all = FALSE
        start <- ifelse(include.all, 
                        1, 
                        ifelse(length(tokens) < n, 1, n))
    
        # set max size of ngram at max length of tokens
        end <- ifelse(length(tokens) < n, length(tokens), n)
    
        all_ngrams <- c()
        # outer loop for all ngrams down to 1
        for (width in start:end) {
            new_ngrams <- tokens[1:(length(tokens) - width + 1)]
            # inner loop for ngrams of width > 1
            if (width > 1) {
                for (i in 1:(width - 1)) 
                    new_ngrams <- paste(new_ngrams, 
                                        tokens[(i + 1):(length(tokens) - width + 1 + i)], 
                                        sep = concatenator)
            }
            # paste onto previous results and continue
            all_ngrams <- c(all_ngrams, new_ngrams)
        }
    
        all_ngrams
    }
    

    Here is the comparison for a simple text:

    txt <- c("The quick brown fox named Seamus jumps over the lazy dog.", 
             "The dog brings a newspaper from a boy named Seamus.")
    tokens <- tokenize(toLower(txt), removePunct = TRUE)
    tokens
    # [[1]]
    # [1] "the"    "quick"  "brown"  "fox"    "named"  "seamus" "jumps"  "over"   "the"    "lazy"   "dog"   
    # 
    # [[2]]
    # [1] "the"       "dog"       "brings"    "a"         "newspaper" "from"      "a"         "boy"       "named"     "seamus"   
    # 
    # attr(,"class")
    # [1] "tokenizedTexts" "list"     
    
    microbenchmark::microbenchmark(zach_ng <- find_ngrams(tokens, 2),
                                   ken_ng <- find_ngrams2(tokens, 1:2))
    # Unit: microseconds
    #                                expr     min       lq     mean   median       uq     max neval
    #   zach_ng <- find_ngrams(tokens, 2) 288.823 326.0925 433.5831 360.1815 542.9585 897.469   100
    # ken_ng <- find_ngrams2(tokens, 1:2)  74.216  87.5150 130.0471 100.4610 146.3005 464.794   100
    
    str(zach_ng)
    # List of 2
    # $ : chr [1:21] "the" "quick" "brown" "fox" ...
    # $ : chr [1:19] "the" "dog" "brings" "a" ...
    str(ken_ng)
    # List of 2
    # $ : chr [1:21] "the" "quick" "brown" "fox" ...
    # $ : chr [1:19] "the" "dog" "brings" "a" ...
    

    For your really large, simulated text, here is the comparison:

    tokens <- stri_split_fixed(sents1, ' ')
    zach_ng1_t1 <- system.time(zach_ng1 <- find_ngrams(tokens, 2))
    ken_ng1_t1 <- system.time(ken_ng1 <- find_ngrams2(tokens, 1:2))
    zach_ng1_t1
    #    user  system elapsed 
    # 230.176   5.243 246.389 
    ken_ng1_t1
    #   user  system elapsed 
    # 58.264   1.405  62.889 
    

    This is already an improvement, and I'd be delighted if it could be improved further. I should also be able to implement the faster dfm() method in quanteda, so that you can get what you want simply through:

    dfm(sents1, ngrams = 1:2, what = "fastestword",
        toLower = FALSE, removePunct = FALSE, removeNumbers = FALSE, removeTwitter = TRUE)
    

    (That already works but is slower than your overall result, because the way you create the final sparse matrix object is faster - but I will change this soon.)

  • 2020-12-13 16:25

    Here is a test using the dev version of tokenizers, which you can get using devtools::install_github("ropensci/tokenizers").
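
    For orientation, a small example of what tokenize_ngrams() returns (bigrams only here; note that it lowercases by default, and the output is shown approximately as a comment):

    library(tokenizers)
    tokenize_ngrams("The quick brown fox jumps over the lazy dog", n = 2)
    # [[1]]
    # [1] "the quick"   "quick brown" "brown fox"   "fox jumps"   "jumps over"
    # [6] "over the"    "the lazy"    "lazy dog"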

    Using the definitions of sents1, sents2, and find_ngrams() above:

    library(stringi)
    library(magrittr)
    library(tokenizers)
    library(microbenchmark)
    library(pbapply)
    
    
    set.seed(198)
    sents1_sample <- sample(sents1, 1000)
    sents2_sample <- sample(sents2, 1000)
    
    test_sents1 <- microbenchmark(
      find_ngrams(stri_split_fixed(sents1_sample, ' '), n = 2), 
      tokenize_ngrams(sents1_sample, n = 2),
      times = 25)
    test_sents1
    

    Results:

    Unit: milliseconds
                                                         expr       min        lq       mean
     find_ngrams(stri_split_fixed(sents1_sample, " "), n = 2) 79.855282 83.292816 102.564965
                        tokenize_ngrams(sents1_sample, n = 2)  4.048635  5.147252   5.472604
        median         uq        max neval cld
     93.622532 109.398341 226.568870    25   b
      5.479414   5.805586   6.595556    25  a 
    

    Testing on sents2:

    test_sents2 <- microbenchmark(
      find_ngrams(stri_split_fixed(sents2_sample, ' '), n = 2), 
      tokenize_ngrams(sents2_sample, n = 2),
      times = 25)
    test_sents2
    

    Results:

    Unit: milliseconds
                                                         expr      min       lq     mean
     find_ngrams(stri_split_fixed(sents2_sample, " "), n = 2) 509.4257 521.7575 562.9227
                        tokenize_ngrams(sents2_sample, n = 2) 288.6050 295.3262 306.6635
       median       uq      max neval cld
     529.4479 554.6749 844.6353    25   b
     306.4858 310.6952 332.5479    25  a 
    

    Checking straight-up timing:

    timing <- system.time({find_ngrams(stri_split_fixed(sents1, ' '), n = 2)})
    timing
    
       user  system elapsed 
     90.499   0.506  91.309 
    
    timing_tokenizers <- system.time({tokenize_ngrams(sents1, n = 2)})
    timing_tokenizers
    
       user  system elapsed 
      6.940   0.022   6.964 
    
    timing <- system.time({find_ngrams(stri_split_fixed(sents2, ' '), n = 2)})
    timing
    
       user  system elapsed 
    138.957   3.131 142.581 
    
    timing_tokenizers <- system.time({tokenize_ngrams(sents2, n = 2)})
    timing_tokenizers
    
       user  system elapsed 
      65.22    1.57   66.91
    

    A lot will depend on the texts being tokenized, but that seems to indicate a speedup of 2x to 20x.
