Why isn't this simple Python code working?
import urllib
file = urllib.urlopen('http://www.google.com')
print file.read()
This is the error:
If you have Wireshark, check what's being sent out and whether anything is coming back at all. Seeing the actual GET request on the wire will help you debug the problem.
I also remember having a similar problem once; what I did was flush my DNS cache (ipconfig /flushdns) and restart. That fixed my problem. It doesn't hurt to try, I guess.
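As a quick sanity check (my own sketch, not part of the original answer), you can test name resolution directly from Python before blaming urllib; if DNS is the culprit, this fails with socket.gaierror rather than anything HTTP-related:

```python
import socket

# The hostname is just an example; use whatever host your code is fetching.
try:
    print(socket.gethostbyname('www.google.com'))
except socket.gaierror as e:
    print('DNS lookup failed:', e)
```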
For Python 3:
import urllib.request

proxies = urllib.request.ProxyHandler({'http': None})
opener = urllib.request.build_opener(proxies)
urllib.request.install_opener(opener)
j = urllib.request.urlopen(url="https://google.com")
k = j.read()
print(k)
Your code is not the problem here.
Do you have any Proxy settings in your IE?
The Python documentation for urllib.urlopen says:
In a Windows environment, if no proxy environment variables are set,
proxy settings are obtained from the registry's Internet Settings
section.
Try using urllib2 if it is feasible to change a few lines of code, and set the timeout argument (in seconds).
For example:
urllib2.urlopen('http://www.abc.com/api', timeout=20)
Here the connection persists for a longer duration, so if you are reading a large XML file, for example, it avoids an incomplete read.
The original code will never work if the network connection is slow or drops suddenly.
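A hedged Python 3 equivalent of that advice (urllib2 was merged into urllib.request in Python 3; the URL below is a placeholder for whatever API you are calling):

```python
import socket
import urllib.request
import urllib.error

def fetch(url, timeout=20):
    # Pass timeout so a slow or dead connection fails fast instead of hanging.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except socket.timeout:
        print('request timed out after', timeout, 'seconds')
    except urllib.error.URLError as e:
        print('connection failed:', e.reason)
    return None

# Placeholder URL; substitute your own endpoint.
fetch('http://www.abc.com/api', timeout=20)
```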