Scrapy CrawlSpider + Splash: how to follow links through linkextractor?

忘了有多久 2021-02-09 10:46

I have the following code, which is partially working:

class ThreadSpider(CrawlSpider):
    name = 'thread'
    allowed_domains = ['bbs.example.com']
    star         


        
3 Answers
  •  既然无缘
    2021-02-09 11:41

    This seems to be related to https://github.com/scrapy-plugins/scrapy-splash/issues/92

    Personally, I pass dont_process_response=True so that the response is an HtmlResponse (which is what the code in _requests_to_follow requires).

    I also redefine the _build_request method in my spider, like so:

    def _build_request(self, rule, link):
        # Render the followed link through Splash; dont_process_response=True
        # keeps the response an HtmlResponse, which _requests_to_follow expects
        r = SplashRequest(url=link.url, callback=self._response_downloaded,
                          args={'wait': 0.5}, dont_process_response=True)
        r.meta.update(rule=rule, link_text=link.text)
        return r
    

    In the GitHub issue, some users instead redefine the _requests_to_follow method in their class.
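
    That workaround can be sketched like this. The method body mirrors CrawlSpider._requests_to_follow from Scrapy versions of that era (the signatures of process_request and _build_request have changed across releases, so check your installed Scrapy's source before copying), with the isinstance check widened to accept the Splash response types:

    ```python
    from scrapy.http import HtmlResponse
    from scrapy.spiders import CrawlSpider
    from scrapy_splash import SplashJsonResponse, SplashTextResponse

    class SplashCrawlSpider(CrawlSpider):

        def _requests_to_follow(self, response):
            # Stock CrawlSpider returns early unless the response is an
            # HtmlResponse; Splash yields SplashJsonResponse or
            # SplashTextResponse, so widen the type check
            if not isinstance(response, (HtmlResponse,
                                         SplashJsonResponse,
                                         SplashTextResponse)):
                return
            seen = set()
            for rule_index, rule in enumerate(self._rules):
                links = [link
                         for link in rule.link_extractor.extract_links(response)
                         if link not in seen]
                if links and rule.process_links:
                    links = rule.process_links(links)
                for link in links:
                    seen.add(link)
                    # _build_request here is the override shown above
                    yield rule.process_request(
                        self._build_request(rule_index, link))
    ```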
