I'd like to use the Python Requests library to GET a file from a URL and use it as a multipart-encoded file in a POST request. The catch is that the file could be very large.
You cannot turn just anything into a context manager in Python; an object needs very specific attributes to be one. With your current code you can do the following:
response = requests.get(big_file_url, stream=True)
post_response = requests.post(upload_url, files={'file': ('filename', response.iter_content())})
Using iter_content ensures that your file is never held in memory all at once, because the iterator is consumed lazily. Using the content attribute instead would load the entire file into memory.
Edit: The only way to reasonably do this is to use chunk-encoded uploads, e.g.,
post_response = requests.post(upload_url, data=response.iter_content())
If you absolutely need to do multipart/form-data encoding, then you will have to create an abstraction layer that takes the generator in its constructor, along with the Content-Length header from response (to provide an answer for len(file)), and exposes a read method that reads from the generator. The issue, again, is that I'm fairly sure the entire thing will be read into memory before it is uploaded.
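As an illustration, such an abstraction layer might look like the following sketch. The class name and constructor signature are my own invention; the length argument would come from the GET response's Content-Length header (assuming the server sends one):

```python
class IterableToFileAdapter:
    """Wrap a chunk iterator (e.g. response.iter_content()) in a
    file-like object that supports read() and len()."""

    def __init__(self, iterable, length):
        self.iterator = iter(iterable)
        self.length = length          # taken from the Content-Length header
        self.buffer = b""

    def __len__(self):
        return self.length

    def read(self, size=-1):
        # Pull chunks from the iterator until the request can be satisfied
        # (or the iterator is exhausted).
        while size < 0 or len(self.buffer) < size:
            try:
                self.buffer += next(self.iterator)
            except StopIteration:
                break
        if size < 0:
            result, self.buffer = self.buffer, b""
        else:
            result, self.buffer = self.buffer[:size], self.buffer[size:]
        return result
```

Note that this only avoids buffering if the consumer reads in pieces; an uploader that calls read() once with no size argument will still pull the whole body into memory, which is exactly the limitation described above.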
Edit #2: You might be able to make a generator of your own that produces the multipart/form-data encoded data yourself. You could pass that in the same way as a chunk-encoded request, but you would have to make sure you set your own Content-Type and Content-Length headers. I don't have time to sketch an example, but it shouldn't be too difficult.
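For illustration, a minimal version of such a generator might look like this. The function name, the random boundary, and the per-part Content-Type are my own assumptions, not part of the original answer:

```python
import uuid


def encode_multipart(field_name, filename, chunks):
    """Return (content_type, body_generator) for a single-file
    multipart/form-data body, streaming the file from `chunks`
    (any iterable of bytes, e.g. response.iter_content())."""
    boundary = uuid.uuid4().hex

    def body():
        # Part header for the file field.
        yield (
            f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="{field_name}"; '
            f'filename="{filename}"\r\n'
            f"Content-Type: application/octet-stream\r\n\r\n"
        ).encode("ascii")
        # File content, streamed chunk by chunk.
        for chunk in chunks:
            yield chunk
        # Closing boundary.
        yield f"\r\n--{boundary}--\r\n".encode("ascii")

    return f"multipart/form-data; boundary={boundary}", body()
```

You would then pass the generator as data= and set the returned Content-Type header yourself, e.g. requests.post(upload_url, data=body, headers={"Content-Type": content_type}); without an explicit Content-Length this goes out as a chunked transfer.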