Getting "Forbidden by robots.txt" with Scrapy

忘了有多久 2020-12-01 05:58

While crawling a website like https://www.netflix.com, I am getting Forbidden by robots.txt: <GET https://www.netflix.com/>

ERROR: No response downloaded for: https://www.netflix.com

3 Answers
  • 2020-12-01 06:29

    The first thing to ensure is that you change the user agent in your requests; otherwise Scrapy's default user agent is very likely to be blocked.
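
    As a minimal sketch, you can set a browser-like user agent for the whole project in settings.py (the UA string below is only an illustration; substitute a current one):

        # settings.py
        # Replace Scrapy's default user agent, which many sites block outright,
        # with a browser-like string (example value; use an up-to-date one).
        USER_AGENT = (
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
            "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36"
        )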

  • 2020-12-01 06:36

    Netflix's Terms of Use state:

    You also agree not to circumvent, remove, alter, deactivate, degrade or thwart any of the content protections in the Netflix service; use any robot, spider, scraper or other automated means to access the Netflix service;

    They have their robots.txt set up to block web scrapers. If you override the setting in settings.py with ROBOTSTXT_OBEY = False, then you are violating their terms of use, which can result in a lawsuit.

  • 2020-12-01 06:39

    Since Scrapy 1.1, released 2016-05-11, a crawl first downloads robots.txt and obeys it by default before crawling. To change this behavior, set ROBOTSTXT_OBEY in your settings.py:

    ROBOTSTXT_OBEY = False
    

    Here are the release notes
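
    If you only need to override this for a single spider rather than the whole project, a minimal sketch using Scrapy's per-spider custom_settings (the spider name and start URL are placeholders; as the previous answer notes, bypassing robots.txt on Netflix would violate their terms of use):

        import scrapy

        class ExampleSpider(scrapy.Spider):
            # Placeholder spider used only to illustrate the override.
            name = "example"
            start_urls = ["https://www.netflix.com/"]

            # custom_settings takes precedence over settings.py,
            # so robots.txt is ignored for this spider only.
            custom_settings = {"ROBOTSTXT_OBEY": False}

            def parse(self, response):
                self.log(f"Downloaded {response.url}")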
