Python 3 urllib vs. requests performance

一整个雨季 2021-02-10 04:51

I'm using Python 3.5 and I'm checking the performance of the urllib module vs. the requests module. I wrote two clients in Python: the first uses the urllib module and the second uses the requests module.

1 Answer
  • 2021-02-10 05:29

    First of all, to reproduce the problem, I had to add the following line to your onStringSend function:

        request.get_data()
    

    Otherwise, I was getting “connection reset by peer” errors because the server’s receive buffer kept filling up.
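    For context, here is a minimal sketch of a server handler with that fix in place, assuming a Flask-style server (the original question's server code isn't shown here, so the route, port, and response body are illustrative; only the onStringSend name and the request.get_data() call come from the discussion above):

        from flask import Flask, request

        app = Flask(__name__)

        @app.route('/', methods=['POST'])
        def onStringSend():
            # Drain the request body so the server's receive buffer
            # doesn't fill up and reset the connection.
            request.get_data()
            return ''

        if __name__ == '__main__':
            app.run(port=8080)  # illustrative port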

    Now, the immediate reason for this problem is that Response.content (which is called implicitly when stream=False) iterates over the response data in chunks of 10240 bytes:

        self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
    
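    (CONTENT_CHUNK_SIZE is defined in requests/models.py as 10 * 1024; you can confirm the value in an interpreter:)

        >>> import requests.models
        >>> requests.models.CONTENT_CHUNK_SIZE
        10240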

    Therefore, the easiest way to solve the problem is to use stream=True, thus telling Requests that you will be reading the data at your own pace:

        response_data = s.post(url=url, data=data, stream=True, verify=False).raw.read()
    

    With this change, the performance of the Requests version becomes more or less the same as that of the urllib version.
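    If you want to measure the two paths side by side, a rough timing sketch along these lines works (the URL, payload size, and session setup are illustrative; it assumes an endpoint that returns a sizeable body):

        import time
        import requests

        url = 'http://localhost:8080/'    # illustrative endpoint
        data = b'x' * (1024 * 1024)       # 1 MiB payload, illustrative
        s = requests.Session()

        # Slow path: stream=False, so Response.content joins 10 KiB chunks.
        start = time.perf_counter()
        _ = s.post(url=url, data=data, verify=False).content
        print('stream=False: %.3f s' % (time.perf_counter() - start))

        # Fast path: stream=True, then read the raw stream in one call.
        start = time.perf_counter()
        _ = s.post(url=url, data=data, stream=True, verify=False).raw.read()
        print('stream=True:  %.3f s' % (time.perf_counter() - start))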

    Please also see the “Raw Response Content” section in the Requests docs for useful advice.
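    One caveat when going through response.raw: it is a urllib3 HTTPResponse, and raw.read() returns the bytes exactly as they came off the socket, with no gzip/deflate decoding applied. If your server compresses responses, you can ask urllib3 to decode for you (a sketch, reusing the names from the snippet above):

        response_data = s.post(url=url, data=data, stream=True, verify=False).raw.read(decode_content=True)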

    Now, the interesting question remains: why is Response.content iterating in such small chunks? After talking to Cory Benfield, a core developer of Requests, it looks like there may be no particular reason. I filed issue #3186 in Requests to look further into this.
