Hi, I have been using this code snippet to download files from a website. So far, files smaller than 1GB are all good, but I noticed that a 1.5GB file comes out incomplete.
I think you forgot to close req. As the requests author said, "If you find yourself partially reading request bodies (or not reading them at all) while using stream=True, you should make the request within a with statement to ensure it's always closed:"
http://2.python-requests.org//en/latest/user/advanced/#body-content-workflow
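A minimal sketch of that pattern, wrapped in a helper function for clarity. The name `download`, the placeholder URL, and the chunk size are illustrative, not part of the original snippet:

```python
import requests

def download(url, dest, chunk_size=8192):
    # The outer `with` guarantees the connection is released even if
    # an exception interrupts the read loop, so a partially read body
    # is always closed.
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)

# download("https://example.com/big-file.bin", "big-file.bin")
```

Without the `with` (or an explicit `r.close()`), an exception mid-transfer can leave the connection open and the file truncated.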
If you are serving the files through Nginx, you may want to check the Nginx config file to see whether you have set

    proxy_max_temp_file_size 3000m;

or not. By default this size is 1G, so you can only get 1024MB.
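For reference, the directive lives in the proxy context; the location path and upstream name below are placeholders, not taken from your setup:

```nginx
location /downloads/ {
    proxy_pass http://backend;
    # Raise the temp-file buffering limit for proxied responses;
    # the compiled-in default is 1024m (1GB).
    proxy_max_temp_file_size 3000m;
}
```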
Please double check that you can download the file via wget and/or any regular browser; it could be a restriction on the server. As far as I can see, your code can download big files (bigger than 1.5GB).
Update: please try to invert the logic. Instead of

    if chunk:  # filter out keep-alive new chunks
        f.write(chunk)
        f.flush()

try

    if not chunk:
        break
    f.write(chunk)
    f.flush()
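Here is that inverted check in the context of a full download loop. The function name `download_strict`, the URL, and the filename are illustrative, not from the original question:

```python
import requests

def download_strict(url, dest):
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in r.iter_content(chunk_size=1024):
                if not chunk:   # an empty chunk now ends the loop
                    break       # instead of being silently skipped
                f.write(chunk)
                f.flush()       # push Python's buffer to the OS per chunk

# download_strict("https://example.com/big-file.bin", "big-file.bin")
```

The difference is that with the original `if chunk:` filter an empty chunk is skipped and the loop keeps going, while the inversion terminates the loop at that point, which can make a short read easier to spot.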