Question
I am using Scrapy to crawl old sites that I own, with the code below as my spider. I don't mind having a file output for each webpage, or a database with all the content in it. But I need the spider to crawl the whole site without me having to enter every single URL, which is what I currently have to do.
import scrapy


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["www.example.com"]
    start_urls = [
        "http://www.example.com/contactus"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2] + '.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
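As a rough sketch (assuming the DmozSpider class above is importable as written), a spider like this is normally started with the scrapy crawl dmoz command from inside a Scrapy project, but it can also be launched from a plain Python script via CrawlerProcess:

from scrapy.crawler import CrawlerProcess

# Sketch only: runs the DmozSpider defined above as a standalone script.
process = CrawlerProcess(settings={
    # Hypothetical user-agent string; adjust to whatever you normally use.
    "USER_AGENT": "Mozilla/5.0 (compatible; my-archiver)",
})
process.crawl(DmozSpider)
process.start()  # blocks until the crawl finishes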
Answer 1:
To crawl the whole site you should use CrawlSpider instead of scrapy.Spider.
Here's an example.
For your purposes, try something like this:
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    # An empty LinkExtractor matches every link on a page; follow=True keeps
    # the spider following links it finds, restricted to allowed_domains.
    rules = (
        Rule(LinkExtractor(), callback='parse_item', follow=True),
    )

    # Note: a CrawlSpider must not override parse(), so the callback has a different name.
    def parse_item(self, response):
        filename = response.url.split("/")[-2] + '.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
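One caveat: the filename line above keeps the code from the question, but response.url.split("/")[-2] takes the second-to-last URL segment, so for URLs without a trailing slash it returns the domain and different pages can overwrite the same file. A minimal sketch of an alternative, assuming it is acceptable to build the filename from the full URL path:

from urllib.parse import urlparse


def parse_item(self, response):
    # Sketch: derive the filename from the whole URL path so distinct pages
    # land in distinct files, e.g. /about/team -> about_team.html, homepage -> index.html
    path = urlparse(response.url).path.strip("/")
    filename = (path.replace("/", "_") or "index") + ".html"
    with open(filename, 'wb') as f:
        f.write(response.body)

The spider itself is then run with scrapy crawl example.com from the project directory.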
Also, take a look at this article.
Source: https://stackoverflow.com/questions/36837594/get-scrapy-spider-to-crawl-entire-site