How can I web scrape in R without failing on missing (null) species pages?

Submitted by 梦想的初衷 on 2021-01-28 04:13:44

Question


I need to extract information about species, so I wrote the following code. However, I have a problem with some absent species: their pages do not exist, and the scrape fails on them. How can I avoid this problem?

# Load the required packages
Q <- c("rvest", "stringr", "tidyverse", "jsonlite")
lapply(Q, require, character.only = TRUE)

# These URLs were obtained by pagination (code not shown, to keep the example short)
sp1 <- as.matrix(c(
  "https://www.gulfbase.org/species/Acanthilia-intermedia",
  "https://www.gulfbase.org/species/Achelous-floridanus",
  "https://www.gulfbase.org/species/Achelous-ordwayi",
  "https://www.gulfbase.org/species/Achelous-spinicarpus",
  "https://www.gulfbase.org/species/Achelous-spinimanus",
  "https://www.gulfbase.org/species/Agolambrus-agonus",
  "https://www.gulfbase.org/species/Agononida-longipes",
  "https://www.gulfbase.org/species/Amphithrax-aculeatus",
  "https://www.gulfbase.org/species/Anasimus-latus"
))

GiveMeData <- function(url){
  page <- read_html(url)

  # CSS selectors for each field on a species page
  sel_min  <- "#block-beaker-content > article > div > main > section.node--full__main > div.node--full__figures > div.figures--joined > div:nth-child(1)"
  sel_max  <- "#block-beaker-content > article > div > main > section.node--full__main > div.node--full__figures > div.figures--joined > div:nth-child(2)"
  sel_dist <- "#block-beaker-content > article > div > main > section.node--full__main > div.node--full__figures > div:nth-child(2) > div:nth-child(2) > div"
  sel_hab  <- "#block-beaker-content > article > div > main > section.node--full__main > div.node--full__figures > div:nth-child(3) > ul"
  sel_hab2 <- "#block-beaker-content > article > div > main > section.node--full__main > div.node--full__figures > div.field > ul > li"
  sel_ref  <- "#block-beaker-content > article > div > main > section.node--full__related"

  # Extract the text for each field
  mintext  <- html_text(html_node(page, sel_min))
  maxtext  <- html_text(html_node(page, sel_max))
  distext  <- html_text(html_node(page, sel_dist))
  habtext  <- html_text(html_node(page, sel_hab))
  habtext2 <- html_text(html_node(page, sel_hab2))
  reftext  <- html_text(html_node(page, sel_ref))

  # Strip labels, units, and whitespace from the raw text
  mintext <- gsub("\n                  \n      Min Depth\n      \n                            \n                      ","",mintext)
  mintext <- gsub(" meters\n                  \n                    ","",mintext)
  maxtext <- gsub("\n                  \n      Max Depth\n      \n                            \n                      ","",maxtext)
  maxtext <- gsub(" meters\n                  \n","",maxtext)
  habtext <- gsub("\n",",",habtext)
  habtext <- gsub("\\s","",habtext)
  reftext <- gsub("\n\n",";",reftext)
  reftext <- gsub("\\s","",reftext)

  # Return a two-row matrix: field names and the scraped values
  rbind(Info = c("Min", "Max", "Distribution", "Habitat", "MicroHabitat", "References"),
        Data = c(mintext, maxtext, distext, habtext, habtext2, reftext))
}

doit <- lapply(sp1, GiveMeData)

The problem is the absent species: when a page does not exist, read_html() throws an error and the whole lapply() call stops. I tried to handle this with a small loop, but it did not work.


Answer 1:


There might be ways to improve the GiveMeData function itself, but keeping it as-is we can wrap each call in tryCatch so that a website that returns an error is skipped instead of stopping the whole run.

# Failed URLs yield NULL instead of aborting the whole lapply()
output <- lapply(c(sp1), function(x) tryCatch(GiveMeData(x), error = function(e) NULL))
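
If you also want to know which species pages failed, here is a minimal sketch (still assuming the sp1 vector above) that names each result by its URL and then separates the failures from the successful scrapes:

# A sketch assuming the sp1 vector defined above:
# name each result by its URL so failures can be identified afterwards
output <- setNames(
  lapply(c(sp1), function(x) tryCatch(GiveMeData(x), error = function(e) NULL)),
  c(sp1)
)

failed  <- names(output)[sapply(output, is.null)]   # URLs that raised an error
results <- output[!sapply(output, is.null)]         # the successful two-row matrices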


Source: https://stackoverflow.com/questions/59701774/how-can-i-web-scraping-without-the-problem-of-null-website-in-r
