Question
I have this code, which runs a Scrapy crawler from a script (http://doc.scrapy.org/en/latest/topics/practices.html#run-scrapy-from-a-script), but it doesn't work.
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from spiders.egov import EgovSpider
from scrapy.utils.project import get_project_settings

def run():
    spider = EgovSpider()
    settings = get_project_settings()
    crawler = Crawler(settings)
    crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
    crawler.configured
    crawler.crawl(spider)
    crawler.start()
    log.start()
    reactor.run()

from apscheduler.schedulers.twisted import TwistedScheduler
sched = TwistedScheduler()
sched.add_job(run, 'interval', seconds=10)
sched.start()
My spider:
import scrapy

class EgovSpider(scrapy.Spider):
    name = 'egov'
    start_urls = ['http://egov-buryatia.ru/index.php?id=1493']

    def parse(self, response):
        data = response.xpath("//div[@id='main_wrapper_content_news']//tr//text()").extract()
        print data
        print response.url
        f = open("vac.txt", "a")
        for d in data:
            f.write(d.encode(encoding="UTF-8") + "\n")
        f.write(str(now))
        f.close()
If I move the "reactor.run()" line out of run() to the end of the script, the spider runs only once, after 10 seconds:
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from spiders.egov import EgovSpider
from scrapy.utils.project import get_project_settings

def run():
    spider = EgovSpider()
    settings = get_project_settings()
    crawler = Crawler(settings)
    crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
    crawler.configured
    crawler.crawl(spider)
    crawler.start()
    log.start()

from apscheduler.schedulers.twisted import TwistedScheduler
sched = TwistedScheduler()
sched.add_job(run, 'interval', seconds=10)
sched.start()
reactor.run()
I have little experience with Python and English :) Please help me.
Answer 1:
I ran into the same problem today. Here is some information.
The Twisted reactor cannot be restarted once it has run and stopped. Instead, you should start a single long-running reactor and add crawler tasks to it periodically.
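For illustration, here is a minimal sketch of that long-running-reactor approach, assuming Scrapy >= 1.0 (where CrawlerRunner and configure_logging are available) and reusing the EgovSpider from the question:

# Sketch only: schedule crawls onto one long-running reactor.
# Assumes Scrapy >= 1.0; EgovSpider is the spider from the question.
from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings
from apscheduler.schedulers.twisted import TwistedScheduler
from spiders.egov import EgovSpider

configure_logging()
runner = CrawlerRunner(get_project_settings())

sched = TwistedScheduler()
sched.add_job(runner.crawl, 'interval', args=[EgovSpider], seconds=10)  # queue a new crawl every 10 seconds
sched.start()

reactor.run()  # started once and never stopped/restarted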
To simplify the code further, you can use CrawlerProcess.start(), which includes reactor.run():
from scrapy.crawler import CrawlerProcess
from spiders.egov import EgovSpider
from scrapy.utils.project import get_project_settings
from apscheduler.schedulers.twisted import TwistedScheduler
process = CrawlerProcess(get_project_settings())
sched = TwistedScheduler()
sched.add_job(process.crawl, 'interval', args=[EgovSpider], seconds=10)
sched.start()
process.start(False) # Do not stop reactor after spider closes
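The False passed to process.start() is its stop_after_crawl argument: with stop_after_crawl=False the reactor keeps running after each crawl finishes, so the jobs the scheduler adds every 10 seconds can still execute.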
Source: https://stackoverflow.com/questions/29765039/how-to-use-apscheduler-with-scrapy