urllib2.URLError:

小蘑菇 2020-11-30 09:10

If I run:

urllib2.urlopen('http://google.com')

Even if I use another URL, I get the same error.

I'm pretty sure there is no firewall.

6 answers
  • 2020-11-30 09:20

    The problem, in my case, was that some install at some point defined an environment variable http_proxy on my machine when I had no proxy.

    Removing the http_proxy environment variable fixed the problem.
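
    This check can also be done from Python itself. A minimal sketch (the helper name and the list of variable names are illustrative; these are the environment variables the urllib family typically honors):

    import os

    # Names the urllib-family libraries read proxy settings from (case varies by OS)
    PROXY_VARS = ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY")

    def clear_proxy_vars():
        """Remove stray proxy variables for this process; return what was removed."""
        removed = {}
        for name in PROXY_VARS:
            value = os.environ.pop(name, None)  # affects only this process
            if value is not None:
                removed[name] = value
        return removed

    Note this only clears the variables for the current process; to fix it permanently, remove the variable from your system or shell configuration.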

  • 2020-11-30 09:23

Add an s to the http, i.e. urllib2.urlopen('https://google.com')

    worked for me

  • 2020-11-30 09:26

This may not help you if it's a network-level issue, but you can get some debugging info by setting debuglevel on httplib. Try this:

    import urllib, urllib2, httplib

    url = 'http://www.mozillazine.org/atom.xml'
    httplib.HTTPConnection.debuglevel = 1  # print HTTP request/response traffic

    print "urllib"
    data = urllib.urlopen(url)

    print "urllib2"
    request = urllib2.Request(url)
    opener = urllib2.build_opener()
    feeddata = opener.open(request).read()

    Which is copied directly from here, hope that's kosher: http://bytes.com/topic/python/answers/517894-getting-debug-urllib2

  • 2020-11-30 09:34

    To troubleshoot the issue:

    1. Let us know what OS the script is running on and which version of Python.
    2. In a command prompt on that very same machine, run ping google.com and observe whether that works (or you get, say, "could not find host").
    3. If (2) worked, open a browser on that machine (try IE if on Windows) and try opening "google.com" there. If there is a problem, look closely at the proxy settings in Internet Options / Connections / LAN Settings.

    Let us know how it goes either way.
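
    Step 2 can also be reproduced from Python: the URLError in question usually wraps a socket.gaierror raised by getaddrinfo, so resolving the host directly shows whether DNS is the problem. A minimal sketch (the helper name is my own):

    import socket

    def can_resolve(host):
        """Return True if DNS resolves host to at least one address."""
        try:
            return len(socket.getaddrinfo(host, 80)) > 0
        except socket.gaierror:
            return False

    If can_resolve('google.com') is False while ping works, compare the proxy and DNS settings the two tools are using.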

  • 2020-11-30 09:39

    The site's DNS record is such that Python fails the DNS lookup in a peculiar way: it finds the entry, but zero associated IP addresses. (Verify with nslookup.) Hence, 11004, WSANO_DATA.

    Prefix the site with 'www.' and try the request again. (Use nslookup to verify that its result is different, too.)

    This fails essentially the same way with the Python Requests module:

    requests.exceptions.ConnectionError: HTTPConnectionPool(host='...', port=80): Max retries exceeded with url: / (Caused by : [Errno 11004] getaddrinfo failed)
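
    You can tell this failure apart from other URLErrors in code, since the wrapped reason is a socket.gaierror. A sketch (the function name and URL handling are illustrative; written to run under both Python 2's urllib2 and its Python 3 successor):

    import socket
    try:
        from urllib.request import urlopen  # Python 3 location
        from urllib.error import URLError
    except ImportError:
        from urllib2 import urlopen, URLError  # Python 2

    def fetch_or_explain(url):
        """Try to open url; report a DNS-level failure instead of raising."""
        try:
            return urlopen(url, timeout=5).read()
        except URLError as exc:
            if isinstance(exc.reason, socket.gaierror):
                return "DNS lookup failed: %s" % exc.reason
            raise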

  • 2020-11-30 09:41

You probably need to use a proxy. Check your normal browser settings to find out which one. Take a look at "opening websites using urllib2 from behind corporate firewall - 11004 getaddrinfo failed" for a similar problem with a solution.
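
    If a proxy is required, you can configure it explicitly instead of relying on environment variables. A sketch with a placeholder proxy address (substitute the host and port your browser uses):

    try:
        import urllib.request as urllib2  # Python 3 location of urllib2's API
    except ImportError:
        import urllib2  # Python 2

    # Placeholder address: replace with your actual proxy host and port
    proxy_handler = urllib2.ProxyHandler({"http": "http://proxy.example.com:8080"})
    opener = urllib2.build_opener(proxy_handler)
    # opener.open('http://google.com') would now route through the proxy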
