I looked into the documentation of urllib, but all I could find on proxies was related to urlopen. However, I want to download a PDF from a given URL and store it locally, but
You've got the right idea; you're just missing a few things.
proxy = urllib2.ProxyHandler({'http': '127.0.0.1'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
urllib2.urlopen('http://www.google.com')
I believe you can do something like this:
import urllib2
proxy = urllib2.ProxyHandler({'http': '123.96.220.2:81'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
with open('filename', 'wb') as f:
    f.write(urllib2.urlopen(URL).read())
Since urllib2 doesn't have urlretrieve, you can just use urlopen to get the same effect. (The with statement also closes the file for you, so an explicit f.close() is unnecessary.)
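If you'd rather not hold the whole PDF in memory at once, a urlretrieve-style helper can be sketched on top of urlopen. The retrieve name, chunk size, and the Python 2/3 import fallback below are my own additions, not anything from the urllib2 docs:

```python
try:
    from urllib2 import urlopen          # Python 2
except ImportError:
    from urllib.request import urlopen   # Python 3

def retrieve(url, filename, chunk_size=8192):
    """Download url to filename, reading in fixed-size chunks."""
    response = urlopen(url)
    with open(filename, 'wb') as f:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:  # empty read means the download is finished
                break
            f.write(chunk)
    return filename
```

Because it goes through urlopen, it still respects any opener you've installed with install_opener, proxy and all.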
You must have got the docs confused, because urllib2 also doesn't have FancyURLopener; that's why you're getting the error. urllib2 is much better at handling proxies and such.
For more info, look here: Urllib2 Docs
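For what it's worth, in Python 3 urllib2 was merged into urllib.request, so the same pattern would look roughly like this (the proxy address is just a placeholder, as above):

```python
import urllib.request

# Same urllib2-style calls, now living in urllib.request.
proxy = urllib.request.ProxyHandler({'http': '123.96.220.2:81'})
opener = urllib.request.build_opener(proxy)
urllib.request.install_opener(opener)

# After install_opener, a plain urlopen goes through the proxy:
# with open('filename', 'wb') as f:
#     f.write(urllib.request.urlopen(URL).read())
```

The names map one-to-one, so the answer above translates directly if you're on Python 3.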