Multiple web table mining with R, RCurl

Submitted by 江枫思渺然 on 2019-12-24 02:47:10

Question


First of all, thanks in advance for any responses.

I need to build one table by combining smaller tables from their respective web pages. So far I have been able to extract the info, but I have failed to do it automatically with a loop. My commands so far are:

library(RCurl)
library(XML)
# index <- toupper(letters)
# EDIT:
index <- LETTERS

index[1] <- "0-A"
url <- paste("www.citefactor.org/journal-impact-factor-list-2014_", index, ".html", sep="", collapse=";")
urls <- strsplit(url, ";") [[1]]

Here is my loop attempt:

read.html.tab <- function(url){
  require(RCurl)
  require(XML)
  uri <- url
  tabs <- NULL
  for (i in uri){
    tabs <- getURL(uri)
    tabs <- readHTMLTable(tabs, stringsAsFactors = F)
    tab1 <- as.data.frame(tabs)
  }
  tab1
}

If I try to use the read.html.tab function:

tab0 <- read.html.tab(urls)

I get the following error: Error in data.frame(`Search Journal Impact Factor List 2014` = list(`0-A` = "N", : arguments imply differing number of rows: 1, 1100, 447, 874, 169, 486, 201, 189, 172, 837....

However, if I pass only one element of urls, the function works:

tabA <- read.html.tab(urls[1])
tabB <- read.html.tab(urls[2]) 
tab.if <- rbind(tabA,tabB)

ifacs <- tab.if[,27:ncol(tab.if)]
View(ifacs)

It seems I'm not understanding how loops work...


Answer 1:


You could probably just scrap the for loop completely and go with something like this:

Data <- lapply(urls, function(x){
  readHTMLTable(
    getURL(x),
    stringsAsFactors=F)[[2]]
})

which will give you a list of data.frames:

R> class(Data)
[1] "list"
R> length(Data)
[1] 26
R> head(Data[[1]])
  INDEX                                        JOURNAL      ISSN 2013/2014  2012  2011  2010  2009  2008
1     1 4OR-A Quarterly Journal of Operations Research 1619-4500     0.918  0.73 0.323  0.69  0.75     -
2     2                                  Aaohn Journal 0891-0162     0.608 0.856 0.509  0.56     -     -
3     3                                  Aapg Bulletin 0149-1423     1.832 1.768 1.831 1.964 1.448 1.364
4     4                                   AAPS Journal 1550-7416     3.905 4.386 5.086 3.942  3.54     -
5     5                              Aaps Pharmscitech 1530-9932     1.776 1.584 1.432 1.211  1.19 1.445
6     6                                   Aatcc Review 1532-8813     0.254 0.354 0.139 0.315 0.293 0.352

I'm not sure if you wanted to combine it all into one object, but if so you can use do.call(rbind, Data). Also, I think each of these URLs returned two tables, the first being the search directory at the top of the page, which is why I used

readHTMLTable(
    getURL(x),
    stringsAsFactors=F)[[2]]

inside of lapply, rather than

readHTMLTable(
        getURL(x),
        stringsAsFactors=F)

The latter would have returned a list of two tables for each URL:

R> head(url1[[1]])
  0-A &nbsp| B &nbsp| C &nbsp| D &nbsp| E &nbsp| F &nbsp| G &nbsp| H &nbsp| I &nbsp| J &nbsp| K &nbsp| L &nbsp| M &nbsp|
1   N &nbsp| O &nbsp| P &nbsp| Q &nbsp| R &nbsp| S &nbsp| T &nbsp| U &nbsp| V &nbsp| W &nbsp| X &nbsp| Y &nbsp| Z &nbsp|
##
R> head(url1[[2]])
  INDEX                                        JOURNAL      ISSN 2013/2014  2012  2011  2010  2009  2008
1     1 4OR-A Quarterly Journal of Operations Research 1619-4500     0.918  0.73 0.323  0.69  0.75     -
2     2                                  Aaohn Journal 0891-0162     0.608 0.856 0.509  0.56     -     -
3     3                                  Aapg Bulletin 0149-1423     1.832 1.768 1.831 1.964 1.448 1.364
4     4                                   AAPS Journal 1550-7416     3.905 4.386 5.086 3.942  3.54     -
5     5                              Aaps Pharmscitech 1530-9932     1.776 1.584 1.432 1.211  1.19 1.445
6     6                                   Aatcc Review 1532-8813     0.254 0.354 0.139 0.315 0.293 0.352

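To illustrate the do.call(rbind, Data) step mentioned above without hitting the network, here is a minimal sketch using synthetic stand-in data.frames in place of the scraped list (the column names are just placeholders, not the real table's):

```r
# Stand-in for the list of per-letter tables returned by lapply() above;
# each element must have identical column names for rbind() to work.
Data <- list(
  data.frame(INDEX = 1:2, JOURNAL = c("A", "B"), stringsAsFactors = FALSE),
  data.frame(INDEX = 3:4, JOURNAL = c("C", "D"), stringsAsFactors = FALSE)
)

# do.call() passes every element of Data as an argument to rbind(),
# stacking all the tables into one data.frame.
combined <- do.call(rbind, Data)
nrow(combined)  # 4
```

The same pattern applies directly to the 26 scraped tables: do.call(rbind, Data) stacks them in order, 0-A through Z.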


Answer 2:


Obligatory Hadleyverse answer:

library(rvest)
library(dplyr)
library(magrittr)
library(pbapply)

urls <- sprintf("http://www.citefactor.org/journal-impact-factor-list-2014_%s.html", 
                c("0-A", LETTERS[-1]))

dat <- urls %>%
  pblapply(function(url) 
    html(url) %>% html_table(header=TRUE) %>% extract2(2)) %>%
  bind_rows()

glimpse(dat)

## Observations: 1547
## Variables:
## $ INDEX     (int) 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,...
## $ JOURNAL   (chr) "4OR-A Quarterly Journal of Operations Researc...
## $ ISSN      (chr) "1619-4500", "0891-0162", "0149-1423", "1550-7...
## $ 2013/2014 (chr) "0.918", "0.608", "1.832", "3.905", "1.776", "...
## $ 2012      (chr) "0.73", "0.856", "1.768", "4.386", "1.584", "0...
## $ 2011      (chr) "0.323", "0.509", "1.831", "5.086", "1.432", "...
## $ 2010      (chr) "0.69", "0.56", "1.964", "3.942", "1.211", "0....
## $ 2009      (chr) "0.75", "-", "1.448", "3.54", "1.19", "0.293",...
## $ 2008      (chr) "-", "-", "1.364", "-", "1.445", "0.352", "1.4...

rvest gives us html() and html_table().

I use magrittr solely for extract2, which just wraps [[ and reads better (IMO).

The pbapply package wraps the *apply functions and gives you free progress bars.

NOTE: bind_rows is in the latest dplyr, so grab that before using it.
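One follow-up detail visible in the glimpse() output above: the impact-factor columns come back as character, because missing values are encoded as "-" in the source tables. A minimal sketch of the conversion, using a tiny synthetic data.frame in place of dat (the column name "2012" is taken from the output above; the values are illustrative):

```r
# "-" entries make readHTMLTable/html_table treat the column as character;
# as.numeric() coerces them to NA (with a warning we suppress deliberately).
dat <- data.frame(`2012` = c("0.73", "-", "1.768"),
                  check.names = FALSE, stringsAsFactors = FALSE)
dat[["2012"]] <- suppressWarnings(as.numeric(dat[["2012"]]))
dat[["2012"]]  # 0.730 NA 1.768
```

Applied across all the year columns, this leaves proper NAs you can handle with na.rm= arguments downstream.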



来源:https://stackoverflow.com/questions/27882381/multiple-web-table-mining-with-r-rcurl
