Question
Why is the content-length different when using requests versus urllib.urlopen(url).info()?
>>> url = 'http://pymotw.com/2/urllib/index.html'
>>> requests.head(url).headers.get('content-length', None)
'8176'
>>> urllib.urlopen(url).info()['content-length']
'38227'
>>> len(requests.get(url).content)
38274
I was going to check the file size in bytes so that I could split the download across multiple threads with Range requests in urllib2, but that won't work without the actual size of the file in bytes. Only len(requests.get(url).content) gives 38274, which is the closest value but still not correct, and it also downloads the whole content, which I didn't want.
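For reference, a minimal sketch of the split I had in mind (using requests instead of urllib2 for brevity; the thread count of 4 and the range arithmetic are only illustrative, and the content-length it reads is the value that looks wrong):

import requests

url = 'http://pymotw.com/2/urllib/index.html'
num_parts = 4  # illustrative thread count

# The total size I want to split -- this is the header value in question
total = int(requests.head(url).headers['content-length'])

# Split [0, total) into num_parts contiguous byte ranges
ranges = []
chunk = total // num_parts
for i in range(num_parts):
    start = i * chunk
    end = total - 1 if i == num_parts - 1 else (i + 1) * chunk - 1
    ranges.append((start, end))

# Each thread would then fetch its own slice, e.g.:
# requests.get(url, headers={'Range': 'bytes=%d-%d' % ranges[0]})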
Answer 1:
By default, requests will send 'Accept-Encoding': 'gzip'
as part of the request headers, and the server will respond with the compressed content:
>>> r = requests.head('http://pymotw.com/2/urllib/index.html')
>>> r.headers['content-encoding'], r.headers['content-length']
('gzip', '8201')
But if you set the request headers manually, you'll get the length of the uncompressed content:
>>> r = requests.head('http://pymotw.com/2/urllib/index.html', headers={'Accept-Encoding': 'identity'})
>>> r.headers['content-length']
'38227'
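Putting it together, a minimal sketch of how the real size could then drive a Range request (the 1024-byte slice is only an illustration; a server that honours Range replies with status 206):

import requests

url = 'http://pymotw.com/2/urllib/index.html'

# Ask for the identity encoding so content-length reports the raw file size
head = requests.head(url, headers={'Accept-Encoding': 'identity'})
size = int(head.headers['content-length'])  # 38227 for this page

# Fetch only the first 1024 bytes of the uncompressed representation
part = requests.get(url, headers={'Range': 'bytes=0-1023',
                                  'Accept-Encoding': 'identity'})
print(size, part.status_code, len(part.content))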
Source: https://stackoverflow.com/questions/24584956/get-file-size-before-downloading-using-http-header-not-matching-with-one-retriev