How to prevent Scrapy from URL encoding request URLs

Submitted by 让人想犯罪 on 2020-01-13 08:44:11

Question


I would like Scrapy not to URL-encode my Requests. I see that scrapy.http.Request imports scrapy.utils.url, which imports w3lib.url, which contains the variable _ALWAYS_SAFE_BYTES. I just need to add a set of characters to _ALWAYS_SAFE_BYTES, but I am not sure how to do that from within my spider class (a rough sketch of such a patch follows the quoted lines below).

Relevant line in scrapy.http.Request:

fp.update(canonicalize_url(request.url))

canonicalize_url comes from scrapy.utils.url; the relevant line in scrapy.utils.url is:

path = safe_url_string(_unquotepath(path)) or '/'

safe_url_string() comes from w3lib.url; the relevant lines in w3lib.url are:

_ALWAYS_SAFE_BYTES = (b'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_.-')

within w3lib.url.safe_url_string():

_safe_chars = _ALWAYS_SAFE_BYTES + b'%' + _reserved + _unreserved_marks
return moves.urllib.parse.quote(s, _safe_chars)
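
Based on the lines quoted above, one rough sketch of such a patch is to widen the module-level constant before any Requests are built, for example at the top of the spider module. Whether this takes effect depends on the w3lib version: in the version quoted here the safe set is rebuilt inside safe_url_string() on each call, but other releases may precompute it at import time, in which case the derived constant would need patching as well.

# Sketch only; assumes the w3lib version quoted above, where the safe
# character set is rebuilt inside safe_url_string() on every call.
import w3lib.url

# Treat '[' and ']' as always-safe so quote() leaves them unencoded.
w3lib.url._ALWAYS_SAFE_BYTES += b'[]'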

Answer 1:


I wanted [ and ] not to be encoded, and this is what I did.

When creating a Request object, Scrapy applies some URL-encoding methods. To revert these, you can use a custom middleware and change the URL to your needs.

You could use a Downloader Middleware like this:

class MyCustomDownloaderMiddleware(object):

    def process_request(self, request, spider):
        # request.url is a read-only property, so write to the underlying
        # attribute to put the brackets back (at most 2 of each).
        request._url = request.url.replace("%5B", "[", 2)
        request._url = request.url.replace("%5D", "]", 2)

Don't forget to "activate" the middleware in settings.py like so:

DOWNLOADER_MIDDLEWARES = {
    'so.middlewares.MyCustomDownloaderMiddleware': 900,
}

My project is named so, and inside that folder there is a file named middlewares.py. You need to adjust those names to your environment.
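
As a quick illustration (the spider name and URL below are made up), a spider yielding a bracketed URL would then have the brackets restored before the request is sent:

import scrapy

class ExampleSpider(scrapy.Spider):
    # Hypothetical spider, only to show the middleware's effect.
    name = "example"

    def start_requests(self):
        # The Request constructor percent-encodes the brackets to %5B/%5D;
        # the middleware above puts them back before the download happens.
        yield scrapy.Request("http://www.example.com/api?ids=[1,2]")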

Credit goes to: Frank Martin



Source: https://stackoverflow.com/questions/24884011/how-to-prevent-scrapy-from-url-encoding-request-urls
