Easiest way to run scrapy crawler so it doesn't block the script

暗喜 2020-12-14 13:27

The official docs give many ways for running scrapy crawlers from code:

import scrapy
from scrapy.crawler import CrawlerProcess

class MySpider(scrapy.Spider):
    # Your spider definition
    ...

process = CrawlerProcess()
process.crawl(MySpider)
process.start()  # the script will block here until the crawling is finished

But all of them block the script until crawling is finished. What's the easiest way to run the crawler so it doesn't block the rest of the script?

2 Answers
  • 2020-12-14 14:21

    I tried every solution I could find, and the only one that worked for me was this one. But to make it work with Scrapy 1.1rc1 I had to tweak it a little:

    from scrapy.crawler import Crawler
    from scrapy import signals
    from scrapy.utils.project import get_project_settings
    from twisted.internet import reactor
    from billiard import Process

    class CrawlerScript(Process):
        def __init__(self, spider):
            Process.__init__(self)
            settings = get_project_settings()
            self.crawler = Crawler(spider.__class__, settings)
            # Stop the reactor once the spider finishes.
            self.crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
            self.spider = spider

        def run(self):
            # Runs in the billiard child process, so the blocking
            # reactor.run() doesn't tie up the parent script.
            self.crawler.crawl(self.spider)
            reactor.run()

    def crawl_async():
        spider = MySpider()
        crawler = CrawlerScript(spider)
        crawler.start()
        crawler.join()
    

    So now when I call crawl_async, it starts crawling and doesn't block my current thread. I'm absolutely new to Scrapy, so maybe this isn't a very good solution, but it worked for me.
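
    For illustration, here is a minimal sketch of how crawl_async might be called from the rest of a script (it assumes MySpider is the spider class defined in the question):

    # Hypothetical usage sketch, not part of the original answer:
    # crawl_async() runs the spider in a billiard child process.
    if __name__ == '__main__':
        crawl_async()
        print('crawl_async() has returned; the rest of the script runs here')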

    I used these versions of the libraries:

    cffi==1.5.0
    Scrapy==1.1rc1
    Twisted==15.5.0
    billiard==3.3.0.22
    
  • 2020-12-14 14:22

    Netimen's answer is correct: process.start() calls reactor.run(), which blocks the thread. I just don't think it is necessary to subclass billiard.Process. Although poorly documented, billiard.Process can run another function asynchronously without subclassing.

    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings

    from billiard import Process

    crawler = CrawlerProcess(get_project_settings())
    # Run crawler.start() in a separate process; pass stop_after_crawl=False
    # through to it so the reactor can be reused for later crawls.
    process = Process(target=crawler.start, kwargs={'stop_after_crawl': False})


    def crawl(*args, **kwargs):
        crawler.crawl(*args, **kwargs)
        process.start()
    

    Note that without stop_after_crawl=False, you may run into a ReactorNotRestartable exception when you run the crawler more than once.
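
    For illustration, here is a minimal sketch of calling this helper (it assumes MySpider is the spider class from the question and that this code lives in the same module as crawl above):

    # Hypothetical usage sketch, not part of the original answer:
    # crawl() schedules the spider, then starts the reactor in a billiard
    # child process, so the parent script keeps running.
    if __name__ == '__main__':
        crawl(MySpider)
        print('crawl() has returned; the main script is not blocked')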
