Question
Is there a way to tell R or the RCurl package to give up on trying to download a webpage if it exceeds a specified period of time, and move on to the next line of code? For example:
> library(RCurl)
> u = "http://photos.prnewswire.com/prnh/20110713/NY34814-b"
> getURL(u, followLocation = TRUE)
> print("next line") # program does not get this far
This will just hang on my system and not proceed to the final line.
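In other words, I want a pattern like the following sketch (only a sketch, anticipating the timeout option from the answers below, wrapped in tryCatch so that a failed download is caught and execution continues; the 5-second limit is arbitrary):

library(RCurl)
u <- "http://photos.prnewswire.com/prnh/20110713/NY34814-b"

# Give curl a hard time limit, and catch the resulting error so that
# the script moves on instead of stopping.
html <- tryCatch(
    getURL(u, followLocation = TRUE, .opts = list(timeout = 5)),
    error = function(e) {
        message("Download failed: ", conditionMessage(e))
        NA_character_   # placeholder so later code can test for failure
    }
)
print("next line")   # reached even when the download fails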
EDIT: Based on @Richie Cotton's answer below, while I can 'sort of' achieve what I want, I don't understand why it takes longer than expected. For example, if I do the following, the system hangs until I select/unselect the 'Misc >> Buffered Output' option in RGUI:
> system.time(getURL(u, followLocation = TRUE, .opts = list(timeout = 1)))
Error in curlPerform(curl = curl, .opts = opts, .encoding = .encoding) :
Operation timed out after 1000 milliseconds with 0 out of 0 bytes received
Timing stopped at: 0.02 0.08 6.76
SOLUTION: Based on @Duncan's answer below and a subsequent look at the curl docs, I found the solution using the maxredirs option as follows:
> getURL(u, followLocation = TRUE, .opts = list(timeout = 1, maxredirs = 2, verbose = TRUE))
Thank you kindly,
Tony Breyal
O/S: Windows 7
R version 2.13.0 (2011-04-13)
Platform: x86_64-pc-mingw32/x64 (64-bit)
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] RCurl_1.6-4.1 bitops_1.0-4.1
loaded via a namespace (and not attached):
[1] tools_2.13.0
Answer 1:
I believe that the web server is getting itself into a confused state: it tells us that the URL has temporarily moved, and then it points us to a new URL
http://photos.prnewswire.com/medias/switch.do?prefix=/appnb&page=/getStoryRemapDetails.do&prnid=20110713%252fNY34814%252db&action=details
When we follow that, it redirects us again to ... the same URL!
So the timeout is not the problem. The response comes back very quickly, so the timeout duration is never exceeded. It is the fact that we go round and round in circles that causes the apparent hang.
The way I found this was by adding verbose = TRUE to the list of .opts. Then we see all the communication between us and the server.
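For example, something along these lines records the conversation so it can be inspected afterwards (a sketch: debugGatherer collects curl's debug stream rather than printing it, and the maxredirs = 5 cap is only there to stop the loop):

library(RCurl)
u <- "http://photos.prnewswire.com/prnh/20110713/NY34814-b"
d <- debugGatherer()   # accumulates curl's debug output instead of printing it
tryCatch(
    getURL(u, followLocation = TRUE, verbose = TRUE,
           debugfunction = d$update, .opts = list(maxredirs = 5)),
    error = function(e) message(conditionMessage(e))
)
cat(d$value()["headerIn"])   # the repeated Location: headers show the loop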
D.
Answer 2:
timeout and connecttimeout are curl options, so they need to be passed in a list to the .opts parameter of getURL. Not sure which of the two you need, but start with
getURL(u, followLocation = TRUE, .opts = list(timeout = 3))
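For what it's worth, connecttimeout caps only the time allowed to establish the connection, while timeout caps the whole transfer, so for a page that connects fine but never finishes you want timeout. A quick sketch of the two:

# timeout: maximum time in seconds for the entire request
getURL(u, followLocation = TRUE, .opts = list(timeout = 3))
# connecttimeout: maximum time to establish the connection only
getURL(u, followLocation = TRUE, .opts = list(connecttimeout = 3))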
EDIT:
I can reproduce the hang; changing buffered output doesn't fix it for me (tested under R 2.13.0 and R 2.13.1), and it happens with or without the timeout argument. If you try getURL on the page that is the target of the redirect, it appears blank.
u2 <- "http://photos.prnewswire.com/medias/switch.do?prefix=/appnb&page=/getStoryRemapDetails.do&prnid=20110713%252fNY34814%252db&action=details"
getURL(u2)
If you remove the page argument, it redirects you to a login page; maybe PR Newswire is doing something funny with asking for credentials.
u3 <- "http://photos.prnewswire.com/medias/switch.do?prefix=/appnb&prnid=20110713%252fNY34814%252db&action=details"
getURL(u3)
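One more way to see what's going on (a sketch): fetch only the first response, headers included, without following the redirect, and look at the Location header.

library(RCurl)
u <- "http://photos.prnewswire.com/prnh/20110713/NY34814-b"
# header = TRUE includes the response headers in the returned text;
# followlocation = FALSE stops curl from chasing the redirect itself
resp <- getURL(u, followlocation = FALSE, header = TRUE)
cat(resp)   # the Location: line shows the switch.do URL the server bounces to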
Source: https://stackoverflow.com/questions/6733748/how-to-stop-execution-of-rcurlgeturl-if-it-is-taking-too-long