How to keep a connection alive in HTTP/1.1 using Python urllib

Submitted by 陌路散爱 on 2019-12-07 11:17:40

Question


For now I am doing this (Python 3, urllib):

import urllib.parse
import urllib.request

url = 'someurl'
url2 = 'someurl2'
headers = (('Host', 'somehost'),
           ('Connection', 'keep-alive'),
           ('Accept-Encoding', 'gzip,deflate'))
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor())
for h in headers:
    opener.addheaders.append(h)
data = urllib.parse.urlencode({'username': '...', 'password': '...'}).encode()  # username, pw etc.
opener.open('somesite/login.php', data)

res = opener.open(url)
data = res.read()
# ... some stuff here ...
res1 = opener.open(url2)
data = res1.read()
# etc.

What is happening is this:

I keep getting gzipped responses from the server, and I stay logged in (I am fetching content that would not be available if I were not logged in), but I think the connection is dropping between every opener.open request.

I think that because connecting is very slow and it seems like there is a new connection every time. Two questions:

a) How do I test whether the connection is in fact staying alive or dying? (A rough timing check is sketched below.)
b) How do I make it stay alive between requests for other URLs?
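
One rough way to check (a), sketched here with example.com standing in for the real host: time consecutive requests through the same opener. If every opener.open() pays a fresh TCP (and TLS) handshake, the second request takes about as long as the first; with a reused connection it would be noticeably faster. Watching the sockets with netstat or tcpdump works too.

import time
import urllib.request

opener = urllib.request.build_opener()

# Time two consecutive requests through the same opener. If each one
# sets up a new connection, the timings will be similar; a reused
# connection would make the second request noticeably faster.
for _ in range(2):
    t0 = time.perf_counter()
    opener.open('https://example.com/').read()
    print('request took', round(time.perf_counter() - t0, 3), 's')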

Take care :)


Answer 1:


This will be a very delayed answer, but:

You should look at urllib3. It is for Python 2.x, but you'll get the idea when you see its README.

And yes, urllib does not keep connections alive by default. I'm now porting urllib3 to Python 3 so it stays in my toolbag :)
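
For illustration, a minimal sketch of the same idea with a present-day urllib3, which supports Python 3 (the URLs are placeholders): a PoolManager keeps a pool of open connections per host, so consecutive requests to the same host reuse one TCP connection.

import urllib3

# PoolManager pools connections per host; consecutive requests to the
# same host are sent over the same TCP connection when possible.
http = urllib3.PoolManager()

r1 = http.request('GET', 'https://example.com/page1')
r2 = http.request('GET', 'https://example.com/page2')  # connection reused
print(r1.status, r2.status)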




Answer 2:


In case you didn't know yet, python-requests offers a keep-alive feature, thanks to urllib3.
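
A minimal sketch of what that looks like, assuming placeholder URLs and form fields in place of the real login: a requests.Session keeps cookies across requests and, through urllib3's connection pooling, reuses the underlying TCP connection.

import requests

# Session persists cookies (login state) and reuses connections
# via urllib3's connection pooling.
s = requests.Session()
s.headers.update({'Accept-Encoding': 'gzip,deflate'})

# Placeholder login endpoint and form fields -- substitute the real ones.
s.post('https://somesite/login.php', data={'username': '...', 'password': '...'})

res = s.get('https://somesite/someurl')
print(res.status_code, len(res.content))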



Source: https://stackoverflow.com/questions/4385343/how-to-stay-alive-in-http-1-1-using-python-urllib
