Scrapy request+response+download time

忘了有多久 2021-02-14 01:28

UPD: Not closing this question, because I don't think my current approach is as clear as it should be.

Is it possible to get the current request + response + download time for saving?

3 Answers
  • 2021-02-14 01:52

    I think the best solution is to use Scrapy signals. Whenever a request reaches the downloader, Scrapy emits the request_reached_downloader signal; once the download finishes, it emits the response_downloaded signal. You can catch these from the spider and record the times, and their difference, in meta from there.

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(SignalSpider, cls).from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.item_scraped, signal=signals.item_scraped)
        return spider
    

    A more elaborate answer is available here.
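The snippet above only shows connecting item_scraped. A minimal sketch of the timing idea itself, with the handlers written as plain functions so the logic is testable on its own (the handler names and meta keys below are my own, not Scrapy API; in a real spider you would connect them in from_crawler):

```python
import time

# Sketch of the signals approach. Handler and meta-key names are
# illustrative, not Scrapy built-ins. In a real spider, connect them in
# from_crawler, e.g.:
#   crawler.signals.connect(spider.on_request_reached_downloader,
#                           signal=signals.request_reached_downloader)
#   crawler.signals.connect(spider.on_response_downloaded,
#                           signal=signals.response_downloaded)

def on_request_reached_downloader(request, spider=None):
    # Fired when the request reaches the downloader: stamp the start time.
    request.meta["_reached_downloader_at"] = time.time()

def on_response_downloaded(response, request, spider=None):
    # Fired once the raw response has been downloaded: store the elapsed time.
    start = request.meta.get("_reached_downloader_at")
    if start is not None:
        request.meta["download_time"] = time.time() - start
```

Your spider (or an item pipeline) can then read download_time from the request's meta when building the item.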

  • You could write a Downloader Middleware that times each request: add a start time to the request before it is sent, then record the finish time once the response arrives. Arbitrary data like this is typically stored in the Request.meta attribute. The timing information can later be read by your spider and added to your item.

    This downloader middleware sounds like it could be useful on many projects.
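A minimal sketch of such a middleware (the class name and meta keys are hypothetical; downloader middlewares are duck-typed, so the timing logic itself needs no Scrapy import, and you would enable the class via the DOWNLOADER_MIDDLEWARES setting):

```python
import time

class RequestTimerMiddleware:
    """Hypothetical downloader middleware that times each request.

    Enable it in settings.py via DOWNLOADER_MIDDLEWARES; the meta keys
    used below are illustrative, not Scrapy built-ins.
    """

    def process_request(self, request, spider):
        # Stamp the request just before it is sent to the downloader.
        request.meta["request_start_time"] = time.time()
        return None  # let Scrapy continue processing the request

    def process_response(self, request, response, spider):
        # Compute the elapsed time once the response comes back.
        start = request.meta.get("request_start_time")
        if start is not None:
            request.meta["request_duration"] = time.time() - start
        return response
```

The spider can then read request_duration from response.meta (response.meta is a shortcut to the originating request's meta) and copy it onto the item.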

  • 2021-02-14 02:14

    Not sure you need a middleware here. Scrapy provides request.meta, which you can query and yield from. For the download latency, simply yield

    download_latency=response.meta.get('download_latency'),
    

    The amount of time spent to fetch the response, since the request has been started, i.e. HTTP message sent over the network. This meta key only becomes available when the response has been downloaded. While most other meta keys are used to control Scrapy behavior, this one is supposed to be read-only.
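Put together, a parse callback that saves the latency onto the scraped item might look like this (the item fields are illustrative):

```python
def parse(self, response):
    # download_latency is set by Scrapy's downloader once the response
    # has been received; it is read-only and measured in seconds.
    yield {
        "url": response.url,
        "download_latency": response.meta.get("download_latency"),
    }
```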
