Python urlparse — extract domain name without subdomain

Asked by 南笙 on 2020-12-01 02:30

Need a way to extract a domain name without the subdomain from a url using Python urlparse.

For example, I would like to extract "google.com" from a full URL like "http://www.google.com".

7 Answers
  • 2020-12-01 03:12

    Using tldextract works fine, but it apparently has a problem parsing the blogspot.com subdomain and creates a mess. If you would like to go ahead with that library, make sure to implement an if condition (or similar) to handle the case where an empty string is returned for the subdomain.
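    A minimal sketch of such a guard. Note that ExtractResult below is a local stand-in for tldextract's result tuple, defined here for illustration rather than imported from the library:

```python
from collections import namedtuple

# Local stand-in for tldextract's ExtractResult, so the sketch is self-contained.
ExtractResult = namedtuple("ExtractResult", ["subdomain", "domain", "suffix"])

def registered_domain(parts):
    """Join domain and suffix, but refuse to guess when the suffix is empty."""
    if not parts.suffix:  # e.g. an IP address, localhost, or an unknown TLD
        return None
    return f"{parts.domain}.{parts.suffix}"

print(registered_domain(ExtractResult("www", "google", "com")))  # google.com
print(registered_domain(ExtractResult("", "localhost", "")))     # None
```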

  • 2020-12-01 03:16

    This is an update, based on the bounty request for an updated answer.

    Start by using the tld package. A description of the package:

    Extracts the top level domain (TLD) from the URL given. List of TLD names is taken from Mozilla http://mxr.mozilla.org/mozilla/source/netwerk/dns/src/effective_tld_names.dat?raw=1

    from tld import get_tld
    from tld.utils import update_tld_names
    update_tld_names()

    print(get_tld("http://www.google.co.uk"))
    print(get_tld("http://zap.co.it"))
    print(get_tld("http://google.com"))
    print(get_tld("http://mail.google.com"))
    print(get_tld("http://mail.google.co.uk"))
    print(get_tld("http://google.co.uk"))
    

    This outputs

    google.co.uk
    zap.co.it
    google.com
    google.com
    google.co.uk
    google.co.uk
    

    Notice that it correctly handles country-level TLDs by leaving co.uk and co.it intact, and properly removes the www and mail subdomains for both .com and .co.uk.

    The update_tld_names() call at the beginning of the script is used to update/sync the tld names with the most recent version from Mozilla.

  • 2020-12-01 03:17

    There is no standard decomposition of URLs that gives you this.

    You cannot rely on the www. prefix being present or optional. In many cases it will not be there.

    So if you want to assume that only the last two components are relevant (which also won't work for UK domains, e.g. www.google.co.uk), then you can do a split('.')[-2:].

    Alternatively, and actually less error-prone, strip a www. prefix if present.

    Either way, you cannot assume that the www. is optional, because it will NOT be there every time!
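    A minimal sketch of that naive approach (naive_domain is a made-up helper name) shows exactly where it falls over:

```python
from urllib.parse import urlparse

def naive_domain(url):
    """Strip an optional 'www.' prefix and keep the last two labels.
    Fine for plain gTLDs, but wrong for country-level suffixes."""
    host = urlparse(url).netloc
    if host.startswith("www."):
        host = host[len("www."):]
    return ".".join(host.split(".")[-2:])

print(naive_domain("http://www.google.com"))    # google.com -- fine
print(naive_domain("http://www.google.co.uk"))  # co.uk -- wrong!
```

    This is exactly why the suffix list mentioned below exists: the number of labels in the public suffix varies per registry.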

    Here is a list of common suffixes for domains. You can try to keep the suffix + one component.

    https://mxr.mozilla.org/mozilla-central/source/netwerk/dns/effective_tld_names.dat?raw=1

    But how do you plan to handle, for example, first.last.name domains? Assume that all users with the same last name are the same company? Initially, you could only get third-level domains there; by now, you can apparently get second-level ones too. So for .name there is no general rule.

  • 2020-12-01 03:19
    from tld import get_tld
    from tld.utils import update_tld_names
    update_tld_names()

    result = get_tld('http://www.google.com')
    print('https://' + result)
    

    Input: http://www.google.com

    Result: google.com

  • You probably want to check out tldextract, a library designed to do this kind of thing.

    It uses the Public Suffix List to try and get a decent split based on known gTLDs, but do note that this is just a brute-force list, nothing special, so it can get out of date (although hopefully it's curated so as not to).

    >>> import tldextract
    >>> tldextract.extract('http://forums.news.cnn.com/')
    ExtractResult(subdomain='forums.news', domain='cnn', suffix='com')
    

    So in your case:

    >>> extracted = tldextract.extract('http://www.google.com')
    >>> "{}.{}".format(extracted.domain, extracted.suffix)
    'google.com'
    
  • 2020-12-01 03:25

    There are multiple Python modules which encapsulate the (once Mozilla) Public Suffix List in a library, several of which don't require the input to be a URL. Even though the question asks about URLs specifically, my requirement was to handle just domain names, and so I'm offering a tangential answer for that.

    The relative merits of publicsuffix2 over publicsuffixlist or publicsuffix are unclear, but they all seem to offer the basic functionality.

    publicsuffix2:

    >>> import publicsuffix  # sic
    >>> publicsuffix.PublicSuffixList().get_public_suffix('www.google.co.uk')
    u'google.co.uk'
    
    • Supposedly more packaging-friendly fork of publicsuffix.

    publicsuffixlist:

    >>> import publicsuffixlist
    >>> publicsuffixlist.PublicSuffixList().privatesuffix('www.google.co.uk')
    'google.co.uk'
    
    • Advertises idna support, which I however have not tested.

    publicsuffix:

    >>> import publicsuffix
    >>> publicsuffix.PublicSuffixList(publicsuffix.fetch()).get_public_suffix('www.google.co.uk')
    'google.co.uk'
    
    • The requirement to handle updates and cache the downloaded file yourself is a bit of a complication.
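    The longest-match rule that all of these libraries implement (take the longest known public suffix, then keep one more label) can be sketched by hand, with a toy suffix set standing in for the full list:

```python
# Toy subset of the Public Suffix List; the real list has thousands of rules,
# including wildcard and exception rules that this sketch ignores.
PUBLIC_SUFFIXES = {"com", "co.uk", "co.it", "name"}

def private_suffix(hostname):
    """Return the longest public suffix plus one extra label, or None."""
    labels = hostname.lower().split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in PUBLIC_SUFFIXES:
            # Need at least one label in front of the suffix to form a domain.
            return ".".join(labels[i - 1:]) if i > 0 else None
    return None

print(private_suffix("www.google.co.uk"))  # google.co.uk
print(private_suffix("mail.google.com"))   # google.com
```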