Scrapy - how to identify already scraped urls

南笙 2020-12-05 08:28

I'm using Scrapy to crawl a news website on a daily basis. How do I restrict Scrapy from scraping URLs that have already been scraped? Also, is there any clear documentation or examples on

5 Answers
  • 2020-12-05 08:31

    As of today (2019), this post is the best answer to this problem.

    https://blog.scrapinghub.com/2016/07/20/scrapy-tips-from-the-pros-july-2016

    It's a library that handles the middleware setup for you automatically.

    Hope this helps someone. I've spent a lot of time searching for this.
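
    If memory serves, the post linked above covers the scrapy-deltafetch spider middleware, which persists a fingerprint of every request that produced items and skips those requests on later runs. Assuming that is the library meant, enabling it comes down to roughly two settings in settings.py (a sketch; check the project's README for the exact, current names):

    # settings.py -- sketch of enabling scrapy-deltafetch (assumed to be the
    # library described in the linked post); priority 100 mirrors its docs.
    SPIDER_MIDDLEWARES = {
        'scrapy_deltafetch.DeltaFetch': 100,
    }
    DELTAFETCH_ENABLED = True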

  • 2020-12-05 08:42

    You can actually do this quite easily with the scrapy snippet located here: http://snipplr.com/view/67018/middleware-to-avoid-revisiting-already-visited-items/

    To use it, copy the code from the link into a file in your Scrapy project, then reference it in your settings.py:

    SPIDER_MIDDLEWARES = { 'project.middlewares.ignore.IgnoreVisitedItems': 560 }
    

    The specifics of why you pick the particular priority number are explained here: http://doc.scrapy.org/en/latest/topics/downloader-middleware.html

    Finally, you'll need to modify your items.py so that each item class has the following fields:

    # inside each item class in items.py (Field is scrapy.Field)
    visit_id = Field()
    visit_status = Field()
    

    And I think that's it. The next time you run your spider, it should automatically start avoiding pages it has already visited.

    Good luck!
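
    Since the Snipplr link may no longer resolve, here is a rough reconstruction of what such an IgnoreVisitedItems spider middleware looks like. The identifiers (FILTER_VISITED, visit_id, visit_status) follow this thread, but the body is an approximation under those assumptions, not the original snippet:

    # project/middlewares/ignore.py -- approximate sketch, not the original code
    from scrapy import Request

    class IgnoreVisitedItems:
        """Drop requests flagged with FILTER_VISITED whose page already
        produced an item in a previous (or the current) run."""

        FILTER_VISITED = 'filter_visited'

        def process_spider_output(self, response, result, spider):
            # spider.state is only persisted across runs when JOBDIR is set;
            # fall back to an in-memory dict otherwise.
            if not hasattr(spider, 'state'):
                spider.state = {}
            visited = spider.state.setdefault('visited_ids', set())

            for x in result:
                if isinstance(x, Request):
                    if self.FILTER_VISITED in x.meta and x.url in visited:
                        spider.logger.debug('Ignoring already visited: %s', x.url)
                        continue
                else:
                    # An item was scraped: remember where it came from and fill
                    # the bookkeeping fields declared in items.py.
                    visited.add(response.url)
                    x['visit_id'] = response.url
                    x['visit_status'] = 'new'
                yield x

    Note that a request is only filtered if it carries FILTER_VISITED in its meta, which is the point made in one of the answers further down.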

  • 2020-12-05 08:46

    This is straightforward. Maintain all your previously crawled URLs in a Python dict; the next time around, check whether a URL is already in the dict and only crawl it if it isn't.

    def load_urls(prev_urls):
        # Build a lookup dict from the URLs crawled in previous runs.
        prev = dict()
        for url in prev_urls:
            prev[url] = True
        return prev

    def fresh_crawl(prev_urls, new_urls):
        # Only crawl URLs that were not seen before; crawl() stands in for
        # whatever function actually fetches and parses a page.
        for url in new_urls:
            if url not in prev_urls:
                crawl(url)

    def main(prev_urls, new_urls):
        purls = load_urls(prev_urls)
        fresh_crawl(purls, new_urls)

    The above code was typed straight into the SO text editor (i.e. the browser), so it may contain syntax errors and need a few tweaks, but the logic is there...
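
    For the daily-crawl case in the question, the previously seen URLs have to be persisted somewhere between runs. A minimal sketch, assuming they live in a plain text file (the file name seen_urls.txt and the helper names are illustrative):

    import os

    SEEN_FILE = 'seen_urls.txt'  # illustrative file name

    def load_seen():
        # One URL per line from earlier runs, if the file exists yet.
        if not os.path.exists(SEEN_FILE):
            return set()
        with open(SEEN_FILE) as f:
            return {line.strip() for line in f if line.strip()}

    def save_seen(seen):
        with open(SEEN_FILE, 'w') as f:
            f.write('\n'.join(sorted(seen)))

    load_seen() would feed load_urls()/fresh_crawl() above, and save_seen() would run once the day's crawl has finished.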

    NOTE: Beware that some websites constantly change their content, so you may sometimes have to recrawl a particular webpage (i.e. the same URL) just to get the updated content.

  • 2020-12-05 08:49

    I think jama22's answer is a little incomplete.

    In the snippet if self.FILTER_VISITED in x.meta:, you can see that a request has to carry FILTER_VISITED in its meta for it to be ignored. This lets you differentiate between navigation links that you want to keep traversing and item links that you don't want to see again, as sketched below.
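
    A sketch of what that looks like inside a spider: navigation links are yielded normally, while item links carry the flag. The spider name, the selectors and the 'filter_visited' meta key are illustrative and must match whatever key the middleware actually checks:

    import scrapy

    class NewsSpider(scrapy.Spider):
        name = 'news'  # illustrative
        start_urls = ['https://example.com/news']  # illustrative

        def parse(self, response):
            # Section/pagination links: always follow, never filtered.
            for href in response.css('a.next-page::attr(href)').getall():
                yield response.follow(href, callback=self.parse)

            # Article links: flag them so the middleware may drop revisits.
            for href in response.css('a.article::attr(href)').getall():
                yield response.follow(
                    href,
                    callback=self.parse_article,
                    meta={'filter_visited': True},
                )

        def parse_article(self, response):
            yield {
                'url': response.url,
                'title': response.css('h1::text').get(),
            }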

  • 2020-12-05 08:56

    Scrapy automatically filters URLs that have already been scraped (within a single run, or across runs if you persist the dupefilter state), doesn't it? But different URLs that point to the same page will not be filtered, such as "www.xxx.com/home/" and "www.xxx.com/home/index.html".
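
    One way to catch such variants is to normalize URLs before comparing them. A minimal sketch using canonicalize_url from w3lib (a Scrapy dependency); the index.html rule is purely site-specific and illustrative:

    from w3lib.url import canonicalize_url

    def normalize(url):
        # Sorts query arguments, normalizes percent-encoding, drops fragments.
        url = canonicalize_url(url)
        # Illustrative, site-specific rule: treat ".../index.html" as ".../".
        if url.endswith('/index.html'):
            url = url[:-len('index.html')]
        return url

    seen = set()
    for url in ['http://www.xxx.com/home/', 'http://www.xxx.com/home/index.html']:
        key = normalize(url)
        if key in seen:
            print('skip duplicate:', url)
        else:
            seen.add(key)
            print('crawl:', url)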
