Question
I am doing some basic web scraping with rvest and am getting results back, but the data isn't lining up. That is, I am getting the items, but they come back out of order relative to the site, so the two data elements I am scraping can't be joined into a data.frame.
library(rvest)
library(tidyverse)

base_url <- "https://www.uchealth.com/providers"

loc <- read_html(base_url) %>%
  html_nodes('[class=locations]') %>%
  html_text()

dept <- read_html(base_url) %>%
  html_nodes('[class=department last]') %>%
  html_text()
I was expecting to be able to create a dataframe of:
Location Department
Any suggestions? I was wondering if there is an index that would keep these items together, but I didn't see anything.
EDIT: I also tried the following and did not have any luck. It seems the location is picking up an erroneous starting value:
scraping <- function(base_url = "https://www.uchealth.com/providers") {
  loc <- read_html(base_url) %>%
    html_nodes('[class=locations]') %>%
    html_text()
  dept <- read_html(base_url) %>%
    html_nodes('[class=specialties]') %>%
    html_text()
  data.frame(
    loc  = ifelse(length(loc) == 0, NA, loc),
    dept = ifelse(length(dept) == 0, NA, dept),
    stringsAsFactors = FALSE
  )
}
Answer 1:
The problem you are facing is that not every child node is present in every parent node. The best way to handle this situation is to collect all of the parent nodes in a list/vector, and then extract the desired information from each parent with html_node. Unlike html_nodes, html_node always returns exactly one result per node, even if that result is NA.
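To illustrate the difference on a toy document (this is not the site's real markup, just a made-up two-item list in which only the first item has a locations child):

library(rvest)

# Hypothetical snippet: two parent <li> nodes, only one of which has a .locations child.
toy <- read_html("
  <ul>
    <li class='p'><span class='locations'>Location A</span></li>
    <li class='p'></li>
  </ul>")

parents <- toy %>% html_nodes("li.p")

# html_nodes() drops the parent without a match, so alignment with the parents is lost.
parents %>% html_nodes(".locations") %>% html_text()
#> [1] "Location A"

# html_node() returns exactly one result per parent, with NA where the child is missing.
parents %>% html_node(".locations") %>% html_text()
#> [1] "Location A" NA

Applied to the actual page, that looks like this: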
library(rvest)

# read the page just once
base_url <- "https://www.uchealth.com/providers"
page <- read_html(base_url)

# parse out the parent node for each provider
providers <- page %>% html_nodes('ul[id=providerlist]') %>% html_children()

# parse the requested information out of each child
dept     <- providers %>% html_node("[class ^= 'department']") %>% html_text()
location <- providers %>% html_node('[class=locations]') %>% html_text()
The lengths of providers, dept, and location should all be equal, so the vectors can be combined row by row.
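Since the vectors now line up element-for-element, they can be put into the dataframe the question asked for; a minimal sketch (the result name and column names here are just illustrative):

# Combine the aligned vectors; one row per provider node.
providers_df <- data.frame(
  Location   = location,
  Department = dept,
  stringsAsFactors = FALSE
)
head(providers_df)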
Answer 2:
One, far more involved, option would be to first turn all of the available data in each .searchresult node into a dataframe, and then stack these with dplyr::bind_rows. I think this goes beyond your basic requirements, but it still answers your question in a roundabout way, and it might be useful in the more general case:
library(rvest)
library(tidyverse)

base_url <- "https://www.uchealth.com/providers"
html <- read_html(base_url)

# Extract the `.searchresult` nodes.
res_list <- html %>%
  html_nodes(".searchresult") %>%
  unclass()

# Turn each node into a dataframe.
df_list <- res_list %>%
  map(~ {
    html_nodes(., ".propertylist li") %>%
      html_text(trim = TRUE) %>%
      str_split(":", 2) %>%
      map(~ str_trim(.) %>% cbind() %>% as_tibble()) %>%
      bind_cols() %>%
      set_names(.[1, ]) %>%
      .[-1, ]
  })

# Stack the dataframes, add the person names, and reorder the columns.
ucdf <- bind_rows(df_list) %>%
  mutate(Name = map_chr(res_list, ~ html_node(., "h4") %>% html_text(trim = TRUE))) %>%
  select(Name, 1:(ncol(.) - 1))
Which returns:
# A tibble: 1,137 x 5
Name Title Locations Specialties Department
<chr> <chr> <chr> <chr> <chr>
1 Adrian Abre… Assistant Professor of Med… UC Health Physicians Office South (West Chest… nephrology Internal Medicine
2 Bassam G. A… Associate Professor of Cli… University of Cincinnati Medical Center: (513… nephrology, organ trans… Internal Medicine
3 Brian Adams… Professor, Director of Res… UC Health Physicians Office (Clifton - Piedmo… dermatology Dermatology
4 Opeolu M. A… Associate Professor of Eme… University of Cincinnati Medical Center: (513… emergency medicine, neu… Emergency Medicine
5 Caleb Adler… Professor in the Departmen… UC Health Psychiatry (Stetson Building): (513… psychiatrypsychology, m… Psychiatry & Beha…
6 John Adler,… Assistant Professor of Obs… UC Health Women's Center: (513) 475-8248, UC … gynecology, robotic sur… OB/GYN
7 Steven S. A… Assistant Professor UC Health Physicians Office (Clifton - Piedmo… orthopaedics, spine sur… Orthopaedics & Sp…
8 Surabhi Aga… Assistant Professor of Med… Hoxworth Center: (513) 475-8524, UC Health Ph… rheumatology, connectiv… Internal Medicine
9 Saad S. Ahm… Assistant Professor of Med… Hoxworth Center: (513) 584-7217 cardiovascular disease,… Internal Medicine
10 Syed Ahmad,… Professor of Surgery; Dire… UC Health Barrett Cancer Center: (513) 584-89… surgical oncology, canc… Surgery
# … with 1,127 more rows
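If you only need the pairing from the original question, it can be pulled out of ucdf afterwards (the column names are taken from the printed tibble above):

# Reduce the full table to the Location/Department pairing the question asked about.
ucdf %>% select(Name, Locations, Department)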
Source: https://stackoverflow.com/questions/56673908/how-do-you-scrape-items-together-so-you-dont-lose-the-index