I'm unsure how to do this. One way is:
import urllib.request
urllib.request.urlretrieve('http://www.example.com/file.tar', 'file.tar')
Another way is to use the requests library.
As @jdi said, you may use the requests library. This is also mentioned on https://docs.python.org/2/library/urllib2.html. You will need to install the library with pip, e.g.
pip install requests
My code looks like this:
import requests

def download_file(url):
    response = requests.get(url)
    return response.text
It couldn't be easier.
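One caveat: response.text decodes the body to a string, which will corrupt a binary file like the tar in the question. A small variation of the same pattern saves the raw bytes instead; this is a sketch, and the function name and destination parameter are my own:

```python
import requests

def download_file_binary(url, dest):
    # response.text decodes the body to a string; for a binary file
    # (tar, zip, image) write the raw bytes in response.content.
    response = requests.get(url)
    response.raise_for_status()  # fail loudly on HTTP errors
    with open(dest, "wb") as out:
        out.write(response.content)
    return dest
```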
I am moving what started as comments into an answer...
Your second example is pretty much doing what it needs to: making a request object with a custom header and then reading the result into a local file.
urlretrieve is a higher-level function, so it only does exactly what the docs say: downloads a network resource to a local file and tells you where the file is. If you don't like the slightly lower-level approach of your second example and you want more high-level functionality, you can look into using the Requests library.
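The request-object-with-a-custom-header pattern described above can be sketched with the standard library like this; the helper name, User-Agent value, and chunk size are my own assumptions, not from the original:

```python
import urllib.request

def download_with_header(url, dest, user_agent="Mozilla/5.0"):
    # Build a Request carrying a custom header; some servers reject
    # the default Python-urllib User-Agent.
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    # Stream the response into a local file in chunks so a large
    # download does not have to fit in memory.
    with urllib.request.urlopen(req) as resp, open(dest, "wb") as out:
        while True:
            chunk = resp.read(8192)
            if not chunk:
                break
            out.write(chunk)
    return dest
```

This is essentially what urlretrieve does internally, but with full control over the headers sent.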