Python urlparse — extract domain name without subdomain

Gareth Latty

You probably want to check out tldextract, a library designed to do this kind of thing.

It uses the Public Suffix List to try to get a decent split based on known gTLDs, but note that this is just a brute-force list, nothing special, so it can get out of date (although it is hopefully curated so that it doesn't).

>>> import tldextract
>>> tldextract.extract('http://forums.news.cnn.com/')
ExtractResult(subdomain='forums.news', domain='cnn', suffix='com')
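Recent versions of tldextract (worth checking against the version you have installed) also expose a registered_domain convenience property on this result, which joins the domain and suffix for you:

>>> tldextract.extract('http://forums.news.cnn.com/').registered_domain
'cnn.com'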

So in your case:

>>> extracted = tldextract.extract('http://www.google.com')
>>> "{}.{}".format(extracted.domain, extracted.suffix)
'google.com'

This is an update, based on the bounty request for an updated answer

Start by using the tld package. A description of the package:

Extracts the top level domain (TLD) from the URL given. List of TLD names is taken from Mozilla http://mxr.mozilla.org/mozilla/source/netwerk/dns/src/effective_tld_names.dat?raw=1

from tld import get_tld
from tld.utils import update_tld_names
update_tld_names()

print(get_tld("http://www.google.co.uk"))
print(get_tld("http://zap.co.it"))
print(get_tld("http://google.com"))
print(get_tld("http://mail.google.com"))
print(get_tld("http://mail.google.co.uk"))
print(get_tld("http://google.co.uk"))

This outputs

google.co.uk
zap.co.it
google.com
google.com
google.co.uk
google.co.uk

Notice that it correctly handles country-level TLDs by keeping co.uk and co.it, and that it properly removes the www and mail subdomains for both .com and .co.uk.

The update_tld_names() call at the beginning of the script is used to update/sync the tld names with the most recent version from Mozilla.
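One caveat if you install a newer release of the tld package: as far as I know, the semantics changed at some point, so that get_tld() now returns only the suffix (e.g. co.uk) and a separate get_fld() returns the registered domain shown in the output above. A minimal sketch, assuming a recent tld release:

from tld import get_fld

# On newer tld releases, get_fld() ("first level domain") returns the
# registered domain including its suffix, i.e. what the older get_tld()
# used to return.
print(get_fld("http://www.google.co.uk"))  # expected: google.co.uk
print(get_fld("http://mail.google.com"))   # expected: google.com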

This is not a standard decomposition of URLs.

You cannot rely on the www. prefix being present, nor can you treat it as optional: in many cases it simply will not be there.

So if you want to assume that only the last two components are relevant (which also won't work for the UK, e.g. www.google.co.uk), then you can do a split('.')[-2:].

Or, which is actually less error-prone, strip a www. prefix if it is there.

But either way, you cannot assume that the www. is optional, because it will NOT be there every time!
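To make that pitfall concrete, here is a minimal stdlib-only sketch of the naive "last two labels" approach (nothing beyond urllib.parse; the function name is just for illustration). It works for .com but silently gives the wrong answer for multi-label suffixes such as co.uk:

from urllib.parse import urlparse

def naive_domain(url):
    # Keep only the last two labels of the hostname.
    host = urlparse(url).hostname
    return '.'.join(host.split('.')[-2:])

print(naive_domain('http://www.google.com'))    # google.com  (correct)
print(naive_domain('http://www.google.co.uk'))  # co.uk       (wrong)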

Here is a list of common suffixes for domains. You can try to keep the suffix + one component.

https://mxr.mozilla.org/mozilla-central/source/netwerk/dns/effective_tld_names.dat?raw=1
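If you go down that road, a rough sketch of "suffix + one component" matching against that file might look like this (assuming you have saved it locally as effective_tld_names.dat, and ignoring the list's wildcard and exception rules, which a real implementation must handle):

def registered_domain(hostname, suffixes):
    # suffixes: a set of public suffixes, e.g. {'com', 'co.uk', ...}
    labels = hostname.lower().split('.')
    # Candidates go from longest to shortest, so the first match is the
    # longest known suffix; keep one extra label in front of it.
    for i in range(len(labels)):
        candidate = '.'.join(labels[i:])
        if candidate in suffixes and i > 0:
            return '.'.join(labels[i - 1:])
    return hostname

with open('effective_tld_names.dat', encoding='utf-8') as f:
    suffixes = {line.strip() for line in f
                if line.strip() and not line.startswith('//')}

print(registered_domain('www.google.co.uk', suffixes))  # google.co.uk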

But how do you plan to handle, for example, first.last.name domains? Assume that all users with the same last name are the same company? Initially, you could only register third-level domains there; by now you apparently can get second-level ones too, so for .name there is no general rule.

For domain name manipulation, you can also use Dnspy.

It helps extract domains (and domain labels) at various levels, using a fresh copy of Mozilla Public Suffix list.

Using tldextract works fine, but it apparently has a problem when parsing blogspot.com subdomains and creates a mess. If you want to go ahead with that library, make sure to implement an if condition (or something similar) so you never end up with an empty string for the subdomain; a rough sketch follows below.
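As a rough illustration of such a guard (the URL is hypothetical, and the exact behaviour depends on your tldextract version and how it treats the Public Suffix List's private entries such as blogspot.com), you could fall back when the subdomain comes back empty:

import tldextract

parts = tldextract.extract('http://myblog.blogspot.com')  # hypothetical blogspot URL
# Depending on the tldextract version and how it treats "private" PSL entries
# such as blogspot.com, subdomain may come back as an empty string here.
label = parts.subdomain if parts.subdomain else parts.domain  # never return ''
print(label)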

from tld import get_tld
from tld.utils import update_tld_names
update_tld_names()

result = get_tld('http://www.google.com')
print('https://' + result)

Input: http://www.google.com

Result: google.com

There are multiple Python modules which encapsulate the (once Mozilla) Public Suffix List in a library, several of which don't require the input to be a URL. Even though the question asks about URL normalization specifically, my requirement was to handle just domain names, and so I'm offering a tangential answer for that.

The relative merits of publicsuffix2 over publicsuffixlist or publicsuffix are unclear, but they all seem to offer the basic functionality.

publicsuffix2:

>>> import publicsuffix  # sic
>>> publicsuffix.PublicSuffixList().get_public_suffix('www.google.co.uk')
u'google.co.uk'
  • Supposedly a more packaging-friendly fork of publicsuffix.

publicsuffixlist:

>>> import publicsuffixlist
>>> publicsuffixlist.PublicSuffixList().privatesuffix('www.google.co.uk')
'google.co.uk'
  • Advertises IDNA support, which, however, I have not tested.

publicsuffix:

>>> import publicsuffix
>>> publicsuffix.PublicSuffixList(publicsuffix.fetch()).get_public_suffix('www.google.co.uk')
'google.co.uk'
  • The requirement to handle the updates and caching the downloaded file yourself is a bit of a complication.
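For that last point, a minimal caching sketch (assuming, as the package's documentation describes, that publicsuffix.fetch() returns a readable file-like object containing the list; the cache file name here is arbitrary) could look like:

import os
import publicsuffix

CACHE = 'public_suffix_list.dat'

# Download the list once and reuse the cached copy on later runs.
if not os.path.exists(CACHE):
    with open(CACHE, 'w', encoding='utf-8') as out:
        out.write(publicsuffix.fetch().read())

with open(CACHE, encoding='utf-8') as f:
    psl = publicsuffix.PublicSuffixList(f)

print(psl.get_public_suffix('www.google.co.uk'))  # google.co.uk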