What are the differences between the urllib, urllib2, urllib3 and requests module?

隐瞒了意图╮ 2020-11-22 04:19

In Python, what are the differences between the urllib, urllib2, urllib3 and requests modules? Why are there three? They seem to do the same thing...

11 Answers
  • 2020-11-22 04:52

    To get the content of a url:

    try:  # Try importing requests first.
        import requests
    except ImportError:
        try:  # Fall back to Python 3's urllib.request.
            import urllib.request
        except ImportError:  # Finally fall back to Python 2's urllib.
            import urllib
    
    
    def get_content(url):
        try:  # Using requests: .content is the body of a requests.models.Response, as bytes.
            return requests.get(url).content
        except NameError:  # requests was never imported.
            try:  # Using Python 3 urllib: urlopen() returns an http.client.HTTPResponse.
                with urllib.request.urlopen(url) as response:
                    return response.read()
            except AttributeError:  # Using Python 2 urllib: urlopen() returns an old-style instance.
                return urllib.urlopen(url).read()
    

    It's hard to write code that works with Python 2, Python 3, and an optional requests dependency, because the urlopen() functions and requests.get() return different types:

    • Python 3 urllib.request.urlopen() returns an http.client.HTTPResponse
    • Python 2 urllib.urlopen(url) returns an instance (urllib.addinfourl)
    • requests.get(url) returns a requests.models.Response
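
    A quick usage sketch of the get_content() helper above (the URL is a placeholder); under all three setups it returns the response body as bytes:

    html = get_content('http://www.mywebsite.com')  # placeholder URL
    print(html[:80])                                # first 80 bytes of the body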
  • I know it's been said already, but I'd highly recommend the requests Python package.

    If you've used languages other than Python, you're probably thinking urllib and urllib2 are easy to use, don't need much code, and are highly capable; that's how I used to think. But the requests package is so unbelievably useful and concise that everyone should be using it.

    First, it supports a fully RESTful API, and it's as easy as:

    import requests
    
    resp = requests.get('http://www.mywebsite.com/user')
    resp = requests.post('http://www.mywebsite.com/user')
    resp = requests.put('http://www.mywebsite.com/user/put')
    resp = requests.delete('http://www.mywebsite.com/user/delete')
    

    Whether it's a GET or a POST, you never have to encode parameters yourself again; it simply takes a dictionary as an argument and you're good to go:

    userdata = {"firstname": "John", "lastname": "Doe", "password": "jdoe123"}
    resp = requests.post('http://www.mywebsite.com/user', data=userdata)
    

    Plus it even has a built-in JSON decoder (again, I know json.loads() isn't a lot more to write, but this sure is convenient):

    resp.json()
    

    Or if your response data is just text, use:

    resp.text
    

    This is just the tip of the iceberg. This is the list of features from the requests site (a couple of them are shown in action after the list):

    • International Domains and URLs
    • Keep-Alive & Connection Pooling
    • Sessions with Cookie Persistence
    • Browser-style SSL Verification
    • Basic/Digest Authentication
    • Elegant Key/Value Cookies
    • Automatic Decompression
    • Unicode Response Bodies
    • Multipart File Uploads
    • Connection Timeouts
    • .netrc support
    • Python 2.6—3.4
    • Thread-safe.
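
    A couple of those features in action - a minimal sketch, with a placeholder URL and placeholder credentials:

    import requests
    
    session = requests.Session()             # keep-alive + connection pooling
    session.auth = ('jdoe', 'jdoe123')       # Basic authentication (placeholder credentials)
    
    # Per-request connection timeout (seconds) and browser-style SSL verification.
    resp = session.get('https://www.mywebsite.com/user', timeout=5, verify=True)
    print(resp.status_code, resp.headers.get('Content-Type'))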
  • 2020-11-22 04:57

    A key point that I find missing in the above answers is that urllib returns an object of type <class 'http.client.HTTPResponse'>, whereas requests returns <class 'requests.models.Response'>.

    Because of this, the read() method can be used with urllib but not with requests.

    P.S.: requests is already so rich in methods that it hardly needs read() as one more ;>
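
    A minimal sketch of the contrast (assuming Python 3 for the urllib half; the URL is a placeholder):

    import urllib.request
    import requests
    
    url = 'http://www.mywebsite.com/user'
    
    # urllib: urlopen() returns an http.client.HTTPResponse, which you read like a file.
    with urllib.request.urlopen(url) as resp:
        body = resp.read()                # bytes
    
    # requests: get() returns a requests.models.Response; there is no read(),
    # the body is exposed as attributes instead.
    body = requests.get(url).content      # bytes
    text = requests.get(url).text         # str, decoded for you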

  • 2020-11-22 05:02

    Just to add to the existing answers, I don't see anyone mentioning that requests is not part of the Python standard library. If you are OK with adding dependencies, then requests is fine. However, if you are trying to avoid adding dependencies, urllib is a native Python library that is already available to you (as in the sketch below).
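
    For example, a standard-library-only GET needs nothing installed - a minimal sketch, assuming Python 3 and a placeholder URL:

    from urllib.request import urlopen  # standard library, no pip install needed
    
    with urlopen('http://www.mywebsite.com/user') as resp:
        body = resp.read().decode('utf-8')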

  • 2020-11-22 05:04

    urllib and urllib2 are both Python modules that handle URL requests but offer different functionality.

    1) urllib2 can accept a Request object, which lets you set the headers for a URL request; urllib accepts only a URL string.

    2) urllib provides the urlencode() method, which is used to generate GET query strings; urllib2 has no such function. This is one of the reasons why urllib is often used alongside urllib2.

    Requests - Requests is a simple, easy-to-use HTTP library written in Python.

    1) Requests encodes the parameters automatically, so you just pass them as a plain dictionary, unlike urllib, where you need to use urllib.urlencode() to encode the parameters before passing them (see the sketch below).

    2) It automatically decodes the response body into Unicode.

    3) Requests also has far more convenient error handling. If your authentication fails, urllib2 raises a urllib2.URLError, while Requests returns a normal response object, as expected. To see whether the request succeeded, you just check the boolean response.ok.
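
    A rough sketch of points 1) and 2); the urllib/urllib2 half is Python 2 syntax, and the URL and parameters are placeholders:

    # Python 2: urllib2 sends the request, urllib.urlencode() builds the query string.
    import urllib
    import urllib2
    
    params = urllib.urlencode({'firstname': 'John', 'lastname': 'Doe'})
    req = urllib2.Request('http://www.mywebsite.com/user?' + params,
                          headers={'User-Agent': 'my-script'})
    resp = urllib2.urlopen(req)        # raises urllib2.URLError / HTTPError on failure
    print(resp.read())
    
    # requests: parameters are encoded for you, and failures still return a response object.
    import requests
    
    resp = requests.get('http://www.mywebsite.com/user',
                        params={'firstname': 'John', 'lastname': 'Doe'},
                        headers={'User-Agent': 'my-script'})
    if resp.ok:                        # check success with the boolean response.ok
        print(resp.text)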
