httplib

HTTPConnection.request not respecting timeout?

倾然丶 夕夏残阳落幕 submitted on 2019-11-29 05:15:37
I'm trying to use HTTPConnection (Python 2.7.8) to make a request, and I've set the timeout to 10 with HTTPConnection(host, timeout=10). However, HTTPConnection.request() doesn't seem to time out after 10 seconds. In fact, HTTPConnection.timeout doesn't even seem to be read by HTTPConnection.request() (it's only read by HTTPConnection.connect()). Is my understanding correct? Is timeout only applicable to connect() and not request()? Is there a way to time out request()? Update: I think I've narrowed the issue down further: if I don't provide the scheme, it won't respect the socket timeout. If the…
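For what it's worth, reading the 2.7 httplib source suggests the timeout is not ignored: connect() bakes it into the socket via socket.create_connection(), so every later blocking send/recv in request() and getresponse() honors it per operation (it is not a total wall-clock limit). A minimal sketch, assuming a placeholder host:

import httplib
import socket

# Sketch (Python 2.x): the timeout passed to HTTPConnection is applied
# to the underlying socket in connect(), so blocking reads/writes in
# request() and getresponse() raise socket.timeout too.
conn = httplib.HTTPConnection("example.com", timeout=10)  # placeholder host
try:
    conn.request("GET", "/")
    resp = conn.getresponse()
    print resp.status, resp.reason
except socket.timeout:
    print "timed out"
finally:
    conn.close()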

Suds Error: BadStatusLine in httplib

删除回忆录丶 submitted on 2019-11-29 00:42:35
I am using suds 0.3.6. When creating a suds client, I randomly get an error: httplib.py, _read_status(), line 355, httplib.BadStatusLine. Here is the code used to create the client:

imp = Import('http://www.w3.org/2001/XMLSchema')
imp.filter.add('http://tempuri.org/encodedTypes')
imp.filter.add('http://tempuri.org/')
self.doctor = ImportDoctor(imp)
self.client = Client(self.URL, doctor=self.doctor)

What does this error mean and how can I fix it? Thanks! That means there is a problem on the server side which causes the HTTP server to reply with some junk instead of an ordinary 'HTTP/1.1…
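Since the BadStatusLine here is intermittent (a malformed status line from the server, or a stale keep-alive connection), one common client-side workaround is simply to retry. A hedged sketch, not a suds-specific fix; the retry count and delay are illustrative:

import httplib
import time
from suds.client import Client

def make_client(url, doctor, retries=3, delay=1.0):
    # Retry client creation when the server returns a junk status line.
    for attempt in range(retries):
        try:
            return Client(url, doctor=doctor)
        except httplib.BadStatusLine:
            if attempt == retries - 1:
                raise
            time.sleep(delay)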

Permanent 'Temporary failure in name resolution' after running for a number of hours

送分小仙女□ submitted on 2019-11-28 23:15:56
After running for a number of hours on Linux, my Python 2.6 program that uses urllib2, httplib and threads starts raising this error for every request: <class 'urllib2.URLError'> URLError(gaierror(-3, 'Temporary failure in name resolution'),). If I restart the program it starts working again. My guess is some kind of resource exhaustion, but I don't know how to check for it. How do I diagnose and fix the problem? This was caused by a library's failure to close connections, leading to a large number of connections stuck in the CLOSE_WAIT state. Eventually this causes the 'Temporary failure in name…
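On Linux you can confirm this kind of leak by counting the process's sockets stuck in CLOSE_WAIT (e.g. netstat or ss piped through grep CLOSE_WAIT); once the file-descriptor table fills up, even DNS lookups fail, which is why the symptom is a gaierror. The usual fix is to close every response explicitly rather than relying on garbage collection. A minimal sketch:

import urllib2

def fetch(url):
    # Close the response explicitly so its socket is released right away
    # instead of lingering in CLOSE_WAIT until the object is collected.
    resp = urllib2.urlopen(url, timeout=10)
    try:
        return resp.read()
    finally:
        resp.close()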

python httplib/urllib get filename

这一生的挚爱 submitted on 2019-11-28 10:09:31
Is there a possibility to get the filename, e.g. xyz.com/blafoo/showall.html, if you work with urllib or httplib? So that I can save the file under the filename on the server? If you go to sites like xyz.com/blafoo/ you can't see the filename. Thank you. To get the filename from the response HTTP headers:

import cgi
response = urllib2.urlopen(URL)
_, params = cgi.parse_header(response.headers.get('Content-Disposition', ''))
filename = params['filename']

To get the filename from the URL:

import posixpath
import urlparse
path = urlparse.urlsplit(URL).path
filename = posixpath.basename(path)

Does not make much…
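The two snippets above combine naturally: prefer the Content-Disposition filename when the server sends one, and fall back to the last URL path segment otherwise. A sketch (the function name is mine):

import cgi
import posixpath
import urllib2
import urlparse

def guess_filename(url):
    # Prefer the server-supplied filename, fall back to the URL path.
    response = urllib2.urlopen(url)
    _, params = cgi.parse_header(
        response.headers.get('Content-Disposition', ''))
    if 'filename' in params:
        return params['filename']
    return posixpath.basename(urlparse.urlsplit(url).path)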

Selenium headless browser webdriver [Errno 104] Connection reset by peer

与世无争的帅哥 submitted on 2019-11-28 10:03:15
I am trying to scrape data from the URLs below, but selenium fails on driver.get(url). Sometimes the error is [Errno 104] Connection reset by peer, sometimes [Errno 111] Connection refused. On rare days it works just fine, and on my Mac with a real browser the same spider works fine every single time, so this isn't related to my spider. I have tried many solutions, like waiting for selectors on the page, implicit waits, and using selenium-requests to pass proper request headers, etc., but nothing seems to work. http://www.snapdeal.com/offers/deal-of-the-day https://paytm.com/shop/g/paytm-home/exclusive…

python httplib Name or service not known

南笙酒味 submitted on 2019-11-28 09:24:27
I'm trying to use httplib to send credit card information to authorize.net. When I try to post the request, I get the following traceback:

File "./lib/cgi_app.py", line 139, in run
    res = method()
File "/var/www/html/index.py", line 113, in ProcessRegistration
    conn.request("POST", "/gateway/transact.dll", mystring, headers)
File "/usr/local/lib/python2.7/httplib.py", line 946, in request
    self._send_request(method, url, body, headers)
File "/usr/local/lib/python2.7/httplib.py", line 987, in _send_request
    self.endheaders(body)
File "/usr/local/lib/python2.7/httplib.py", line 940, in endheaders…
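The excerpt is cut off before the actual fix, but the title's gaierror ("Name or service not known") means the hostname handed to the connection could not be resolved; a common cause (an assumption here, given the truncation) is passing a scheme or full URL where httplib expects a bare host. A hedged sketch, with placeholder form fields:

import httplib
import urllib

# HTTPSConnection wants a bare hostname; a scheme or path in the host
# string makes DNS resolution fail with "Name or service not known".
# conn = httplib.HTTPSConnection("https://secure.authorize.net")  # wrong
conn = httplib.HTTPSConnection("secure.authorize.net")  # right
body = urllib.urlencode({"x_login": "...", "x_tran_key": "..."})  # placeholders
headers = {"Content-type": "application/x-www-form-urlencoded"}
conn.request("POST", "/gateway/transact.dll", body, headers)
print conn.getresponse().status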

Python script to see if a web page exists without downloading the whole page?

我是研究僧i submitted on 2019-11-28 06:56:12
I'm trying to write a script to test for the existence of a web page; it would be nice if it could check without downloading the whole page. This is my jumping-off point. I've seen multiple examples use httplib in the same way; however, every site I check simply returns false.

import httplib
from httplib import HTTP
from urlparse import urlparse

def checkUrl(url):
    p = urlparse(url)
    h = HTTP(p[1])
    h.putrequest('HEAD', p[2])
    h.endheaders()
    return h.getreply()[0] == httplib.OK

if __name__ == "__main__":
    print checkUrl("http://www.stackoverflow.com")  # True
    print checkUrl("http://stackoverflow.com")
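One likely reason everything comes back false is that many sites answer a HEAD on the bare host with a redirect (301/302), which is not httplib.OK. A sketch of the same check using the newer HTTPConnection class and treating 2xx/3xx as "exists" (adjust the threshold to taste):

import httplib
from urlparse import urlparse

def check_url(url):
    # HEAD fetches only the headers, never the body.
    p = urlparse(url)
    conn = httplib.HTTPConnection(p.netloc)
    conn.request("HEAD", p.path or "/")
    status = conn.getresponse().status
    conn.close()
    return status < 400  # count redirects as "page exists"

print check_url("http://www.stackoverflow.com")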

Python httplib SSL23_GET_SERVER_HELLO:unknown protocol

為{幸葍}努か submitted on 2019-11-28 04:19:47
Question: Note: this code works fine on Ubuntu but not on Mac, and instead of changing the Mac/Python settings locally I'm trying to change the code so it'll work everywhere.

import ssl
import httplib

httplib.HTTPConnection(server, port, timeout)

but it throws the error: [Errno 1] _ssl.c:503: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol. The code's not using urllib.request but httplib. I want to change the code so it'll take SSLv3 as the default protocol, something like…
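A common workaround is to subclass HTTPSConnection and wrap the socket with an explicit protocol version. Note this sketch pins PROTOCOL_TLSv1 rather than the SSLv3 the question asks for, since SSLv3 is obsolete and widely disabled; swap in ssl.PROTOCOL_SSLv3 if that is really what the server needs. The host is a placeholder:

import httplib
import socket
import ssl

class TLSv1Connection(httplib.HTTPSConnection):
    # Force one protocol version instead of the SSLv23 auto-negotiation.
    def connect(self):
        sock = socket.create_connection((self.host, self.port), self.timeout)
        self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file,
                                    ssl_version=ssl.PROTOCOL_TLSv1)

conn = TLSv1Connection("example.com", 443, timeout=10)  # placeholder host
conn.request("GET", "/")
print conn.getresponse().status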

Python urllib vs httplib?

被刻印的时光 ゝ submitted on 2019-11-28 03:15:13
When would someone use httplib and when urllib? What are the differences? I think I read that urllib uses httplib. I am planning to make an app that will need to make HTTP requests, and so far I've only used httplib.HTTPConnection in Python; reading about urllib I see I can use that for requests too, so what's the benefit of one over the other? urllib (particularly urllib2) handles many things by default or has appropriate libs to do so. For example, urllib2 will follow redirects automatically, and you can use a cookiejar to handle login scripts. These are all things you'd have to code…
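A quick side-by-side sketch of the two levels of abstraction, fetching a placeholder URL:

import httplib
import urllib2

# High level: urllib2 builds the request and follows redirects for you.
print urllib2.urlopen("http://example.com/").read(100)

# Low level: with httplib you manage the connection yourself, and a
# 301/302 response is handed back to you instead of being followed.
conn = httplib.HTTPConnection("example.com")
conn.request("GET", "/")
resp = conn.getresponse()
print resp.status, resp.reason
conn.close()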

How do I have python httplib accept untrusted certs?

一个人想着一个人 submitted on 2019-11-28 02:38:03
Question: How do I have python httplib accept untrusted certs? I created a snake-oil/self-signed cert on my webserver, and my Python client fails to connect as I am using an untrusted cert. I'd rather fix this programmatically in my client code than have the cert trusted on my system.

import httplib

def main():
    conn = httplib.HTTPSConnection("127.0.0.1:443")
    conn.request("HEAD", "/")
    res = conn.getresponse()
    print res.status, res.reason
    data = res.read()
    print len(data)

if __name__ == "__main__":
    main()
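Before Python 2.7.9, httplib did not verify certificates at all, so a failure like this usually points to a newer interpreter. From 2.7.9 on, HTTPSConnection accepts an SSL context, and the commonly cited escape hatch for self-signed certs is the (private) unverified-context helper. A sketch:

import httplib
import ssl

# Python 2.7.9+: opt out of certificate verification for a self-signed
# cert. Do this only for hosts you control.
context = ssl._create_unverified_context()
conn = httplib.HTTPSConnection("127.0.0.1", 443, context=context)
conn.request("HEAD", "/")
res = conn.getresponse()
print res.status, res.reason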