Question
I am using Python.org version 2.7, 64-bit, on Windows Vista 64-bit. I have the following Scrapy code, but the way I have defined the SgmlLinkExtractor is not crawling the site correctly:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from scrapy.item import Item
from scrapy.spider import BaseSpider
from scrapy import log
from scrapy.cmdline import execute
from scrapy.utils.markup import remove_tags
import time


class ExampleSpider(CrawlSpider):
    name = "goal3"
    allowed_domains = ["whoscored.com"]
    start_urls = ["http://www.whoscored.com"]
    download_delay = 1

    #rules = [Rule(SgmlLinkExtractor(allow=()),
    #              follow=True),
    #         Rule(SgmlLinkExtractor(allow=()), callback='parse_item')
    #]

    rules = [
        Rule(
            SgmlLinkExtractor(allow=('Regions/252/Tournaments/2',)),
            callback='parse_item',
            follow=True,
        )
    ]

    def parse_item(self, response):
        self.log('A response from %s just arrived!' % response.url)
        # Grab the (normalized) page title, then dump the text of every <p> tag
        scripts = response.selector.xpath("normalize-space(//title)")
        for script in scripts:
            body = response.xpath('//p').extract()
            body2 = "".join(body)
            print remove_tags(body2).encode('utf-8')


execute(['scrapy', 'crawl', 'goal3'])
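For what it's worth, my understanding is that the values passed to allow are regular expressions that are searched against the absolute URLs of the extracted links. As a standalone sanity check of the pattern itself (the candidate URLs below are just made-up examples of the sort of pages I'm after, not necessarily real paths on the site), I tried:

import re

# The same pattern used in the Rule's allow tuple above
pattern = re.compile(r'Regions/252/Tournaments/2')

# Hypothetical example URLs the crawler might encounter
candidates = [
    'http://www.whoscored.com/Regions/252/Tournaments/2',          # expected: match
    'http://www.whoscored.com/Regions/252/Tournaments/2/Seasons',  # expected: match (substring search)
    'http://www.whoscored.com/Statistics',                         # expected: no match
]

for url in candidates:
    print url, bool(pattern.search(url))

This prints True, True, False, so the regex itself does seem to match the kind of URL I'm targeting.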
I've tried a few different versions of how the SgmlLinkExtractor is defined, yet all that seems to get printed to the command shell is the following:
Contact Us | About Us | Glossary | Privacy Policy | WhoScored Ratings
Copyright © 2014 WhoScored.com
2014-07-20 00:14:38+0100 [goal3] DEBUG: Filtered duplicate request: <GET http://www.whoscored.com/Statistics> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2014-07-20 00:14:40+0100 [goal3] DEBUG: Crawled (200) <GET http://www.whoscored.com/Statistics/Teams> (referer: http://www.whoscored.com/Statistics)
2014-07-20 00:14:40+0100 [goal3] DEBUG: A response from http://www.whoscored.com/Statistics/Teams just arrived!
Contact Us | About Us | Glossary | Privacy Policy | WhoScored Ratings
Copyright © 2014 WhoScored.com
2014-07-20 00:14:41+0100 [goal3] DEBUG: Redirecting (302) to <GET http://www.whoscored.com/404.html?aspxerrorpath=/Statistics/3> from <GET http://www.whoscored.com/Statistics/3>
2014-07-20 00:14:42+0100 [goal3] DEBUG: Redirecting (302) to <GET http://www.whoscored.com/404.html?aspxerrorpath=/Statistics/2> from <GET http://www.whoscored.com/Statistics/2>
2014-07-20 00:14:43+0100 [goal3] DEBUG: Redirecting (302) to <GET http://www.whoscored.com/404.html?aspxerrorpath=/Statistics/1> from <GET http://www.whoscored.com/Statistics/1>
2014-07-20 00:14:45+0100 [goal3] DEBUG: Crawled (200) <GET http://www.whoscored.com/404.html?aspxerrorpath=/Statistics/3> (referer: http://www.whoscored.com/Statistics/Teams)
2014-07-20 00:14:45+0100 [goal3] DEBUG: A response from http://www.whoscored.com/404.html?aspxerrorpath=/Statistics/3 just arrived!
2014-07-20 00:14:46+0100 [goal3] DEBUG: Crawled (200) <GET http://www.whoscored.com/404.html?aspxerrorpath=/Statistics/2> (referer: http://www.whoscored.com/Statistics/Teams)
2014-07-20 00:14:46+0100 [goal3] DEBUG: A response from http://www.whoscored.com/404.html?aspxerrorpath=/Statistics/2 just arrived!
2014-07-20 00:14:47+0100 [goal3] DEBUG: Crawled (200) <GET http://www.whoscored.com/404.html?aspxerrorpath=/Statistics/1> (referer: http://www.whoscored.com/Statistics/Teams)
2014-07-20 00:14:47+0100 [goal3] DEBUG: A response from http://www.whoscored.com/404.html?aspxerrorpath=/Statistics/1 just arrived!
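(As a side note: the duplicate-filter line above mentions DUPEFILTER_DEBUG. If it helps to see every filtered request rather than just the first one, my understanding is that this is just a boolean switch in the project's settings.py, something like:)

# settings.py - log every request dropped by the duplicate filter,
# not only the first occurrence
DUPEFILTER_DEBUG = True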
Can anyone see anything obvious here as to why this is not working?
Thanks
Source: https://stackoverflow.com/questions/24845923/sgmllinkextractor-allow-definition-not-working-with-scrapy