Question
I'm using requests to fetch a URL, like this:
import socket

import requests

while True:
    try:
        rv = requests.get(url, timeout=1)
        doSth(rv)
    except socket.timeout as e:
        print e
    except Exception as e:
        print e
After it runs for a while, it stops working: no exception, no error, as if it were suspended. I then stop the process by pressing Ctrl+C in the console, and the traceback shows the process waiting for data:
.............
httplib_response = conn.getresponse(buffering=True) #httplib.py
response.begin() #httplib.py
version, status, reason = self._read_status() #httplib.py
line = self.fp.readline(_MAXLINE + 1) #httplib.py
data = self._sock.recv(self._rbufsize) #socket.py
KeyboardInterrupt
Why is this happening? Is there a solution?
Answer 1:
It appears that the server you're sending your request to is throttling you: it is sending bytes with less than one second between each packet (thus never triggering your timeout parameter), but slowly enough to appear stuck.

The only fix for this I can think of is to reduce the timeout parameter, unless you can resolve the throttling with the server provider. Do keep in mind that you'll need to account for latency when setting the timeout parameter, otherwise your connection will be dropped too quickly and might not work at all.
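Since requests' timeout only bounds individual socket operations, a server trickling bytes can keep a request alive indefinitely no matter how small you set it. One way around this is to enforce a wall-clock deadline on the whole download by streaming the body in chunks. A minimal sketch, assuming the helper name get_with_deadline and the specific limits are illustrative:

```python
import time

import requests


def get_with_deadline(url, deadline=5.0, chunk_timeout=1.0):
    """Fetch url, aborting if the whole download exceeds `deadline` seconds.

    `timeout` only bounds each individual socket operation, so reading the
    body in chunks lets us also check total elapsed time between chunks.
    """
    start = time.monotonic()
    with requests.get(url, timeout=chunk_timeout, stream=True) as rv:
        chunks = []
        for chunk in rv.iter_content(chunk_size=8192):
            if time.monotonic() - start > deadline:
                raise TimeoutError("%s exceeded %.1fs total" % (url, deadline))
            chunks.append(chunk)
    return b"".join(chunks)
```

This trades requests' convenience accessors (rv.text, rv.json()) for control over total request time, which is exactly what the hanging loop above lacks.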
Answer 2:
By default, requests does not set a timeout for connecting or reading. If for some reason the server cannot get back to the client in time, the client will hang while connecting or reading, most often while reading the response.
The quick fix is to set a timeout value on the request; the approach is well described here: http://docs.python-requests.org/en/master/user/advanced/#timeouts
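Following the linked documentation, you can pass a (connect, read) tuple so each phase gets its own limit. A minimal sketch; the URL and the specific values are only illustrative:

```python
import requests

# (connect timeout, read timeout) in seconds, per the requests docs.
try:
    rv = requests.get("https://example.com/", timeout=(3.05, 27))
    print(rv.status_code)
except requests.exceptions.Timeout:
    print("request timed out")
```

Note that requests raises its own requests.exceptions.Timeout (a subclass of RequestException), so catching socket.timeout directly, as in the question's loop, may not behave as expected.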
Source: https://stackoverflow.com/questions/39227820/requests-process-hangs