How to fetch a non-ascii url with Python urlopen?


Strictly speaking URIs can't contain non-ASCII characters; what you have there is an IRI.

To convert an IRI to a plain ASCII URI:

  • non-ASCII characters in the hostname part of the address have to be encoded using the Punycode-based IDNA algorithm;

  • non-ASCII characters in the path, and most of the other parts of the address have to be encoded using UTF-8 and %-encoding, as per Ignacio's answer.

So:

# Python 2 (the urlparse module was renamed to urllib.parse in Python 3)
import re, urlparse

def urlEncodeNonAscii(b):
    # percent-encode every non-ASCII byte of a UTF-8 byte string
    return re.sub('[\x80-\xFF]', lambda c: '%%%02x' % ord(c.group(0)), b)

def iriToUri(iri):
    parts = urlparse.urlparse(iri)
    return urlparse.urlunparse(
        # index 1 is the netloc (hostname): IDNA-encode it;
        # UTF-8-encode and percent-escape everything else
        part.encode('idna') if parti == 1 else urlEncodeNonAscii(part.encode('utf-8'))
        for parti, part in enumerate(parts)
    )

>>> iriToUri(u'http://www.a\u0131b.com/a\u0131b')
'http://www.xn--ab-hpa.com/a%c4%b1b'

(Technically this still isn't quite good enough in the general case because urlparse doesn't split away any user:pass@ prefix or :port suffix on the hostname. Only the hostname part should be IDNA encoded. It's easier to encode using normal urllib.quote and .encode('idna') at the time you're constructing a URL than to have to pull an IRI apart.)
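For example, here is a minimal Python 3 sketch of that "build it from parts" approach, using urllib.parse.quote and .encode('idna'); the host, path and query values are made up for illustration:

from urllib.parse import quote, urlunsplit

host = 'aıb.example'.encode('idna').decode('ascii')  # IDNA applies to the hostname only
path = quote('/bücher'.encode('utf-8'))              # UTF-8, then percent-encode the path
query = quote('q=weiß'.encode('utf-8'), safe='=&')   # keep '=' and '&' intact in the query
url = urlunsplit(('http', host, path, query, ''))
# 'http://xn--ab-hpa.example/b%C3%BCcher?q=wei%C3%9F'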

Python 3 has libraries to handle this situation. Use urllib.parse.urlsplit to split the URL into its components, urllib.parse.quote to quote/escape the Unicode characters, and urllib.parse.urlunsplit to join it back together.

>>> import urllib.parse
>>> url = 'http://example.com/unicodè'
>>> url = urllib.parse.urlsplit(url)
>>> url = list(url)
>>> url[2] = urllib.parse.quote(url[2])
>>> url = urllib.parse.urlunsplit(url)
>>> print(url)
http://example.com/unicod%C3%A8

In Python 3, use the urllib.parse.quote function on the non-ASCII string:

>>> from urllib.request import urlopen
>>> from urllib.parse import quote
>>> chinese_wikipedia = 'http://zh.wikipedia.org/wiki/Wikipedia:' + quote('首页')
>>> urlopen(chinese_wikipedia)

Encode the unicode to UTF-8, then URL-encode.
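In other words, a minimal Python 3 sketch (the sample string here is just for illustration):

from urllib.parse import quote

text = '首页'
encoded = quote(text.encode('utf-8'))  # UTF-8 encode, then percent-encode the bytes
# encoded == '%E9%A6%96%E9%A1%B5'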

Use the iri2uri method of httplib2. It does the same thing as bobince's answer (is he/she the author of that library?).
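A minimal sketch, assuming the function is exposed at the package level in your installed httplib2 version:

import httplib2

uri = httplib2.iri2uri(u'http://www.a\u0131b.com/a\u0131b')
# expected to give an ASCII-only URI, roughly 'http://www.xn--ab-hpa.com/a%C4%B1b'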

It is more complex than the accepted answer from @bobince suggests:

  • netloc should be encoded using IDNA;
  • non-ascii URL path should be encoded to UTF-8 and then percent-escaped;
  • non-ascii query parameters should be encoded to the encoding of a page URL was extracted from (or to the encoding server uses), then percent-escaped.

This is how all browsers work; it is specified in https://url.spec.whatwg.org/ - see this example. A Python implementation can be found in w3lib (this is the library Scrapy is using); see w3lib.url.safe_url_string:

from w3lib.url import safe_url_string
url = safe_url_string(u'http://example.org/Ñöñ-ÅŞÇİİ/', encoding="<page encoding>")

An easy way to check whether a URL-escaping implementation is incorrect or incomplete is to see whether it provides a 'page encoding' argument.

For those not depending strictly on urllib, one practical alternative is requests, which handles IRIs "out of the box".

For example, with http://bücher.ch:

>>> import requests
>>> r = requests.get(u'http://b\u00DCcher.ch')
>>> r.status_code
200

Based on @darkfeline's answer:

from urllib.parse import urlsplit, urlunsplit, quote

def iri2uri(iri):
    """
    Convert an IRI to a URI (Python 3).
    """
    uri = ''
    if isinstance(iri, str):
        (scheme, netloc, path, query, fragment) = urlsplit(iri)
        scheme = quote(scheme)
        # IDNA-encode the host; note this breaks if netloc carries a
        # user:pass@ prefix or :port suffix
        netloc = netloc.encode('idna').decode('utf-8')
        path = quote(path)
        query = quote(query, safe='=&')  # keep the query separators intact
        fragment = quote(fragment)
        uri = urlunsplit((scheme, netloc, path, query, fragment))

    return uri
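A quick usage check, reusing the IRI from the earlier answer (the percent-escapes come out uppercase here):

>>> iri2uri('http://www.a\u0131b.com/a\u0131b')
'http://www.xn--ab-hpa.com/a%C4%B1b'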

It works, finally!

I could not avoid these strange characters, but in the end I got through it.

import urllib.request
import os


url = "http://www.fourtourismblog.it/le-nuove-tendenze-del-marketing-tenere-docchio/"
with urllib.request.urlopen(url) as response:
    html = response.read()
with open("marketingturismo.html", "w", encoding='utf-8') as f:
    f.write(html.decode('utf-8'))  # decode the fetched bytes before writing
os.system("marketingturismo.html")  # open the saved page with the default application (Windows)