twisted.internet.error.DNSLookupError: DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed - how to fix it

Running the Scrapy shell on Windows with the URL wrapped in single quotes fails with a DNS lookup error:

C:\Users\wuzhi_000\Desktop\tutorial>scrapy shell 'http://quotes.toscrape.com'
2016-11-02 14:59:11 [scrapy] INFO: Scrapy 1.2.1 started (bot: tutorial)
2016-11-02 14:59:11 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'ROBOTSTXT_OBEY': True, 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial', 'LOGSTATS_INTERVAL': 0}
2016-11-02 14:59:11 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2016-11-02 14:59:12 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-11-02 14:59:12 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-11-02 14:59:12 [scrapy] INFO: Enabled item pipelines:
[]
2016-11-02 14:59:12 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-11-02 14:59:12 [scrapy] INFO: Spider opened
2016-11-02 14:59:12 [scrapy] DEBUG: Retrying <GET http://'http:/robots.txt> (failed 1 times): DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.
2016-11-02 14:59:12 [scrapy] DEBUG: Retrying <GET http://'http:/robots.txt> (failed 2 times): DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.
2016-11-02 14:59:12 [scrapy] DEBUG: Gave up retrying <GET http://'http:/robots.txt> (failed 3 times): DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.
2016-11-02 14:59:12 [scrapy] ERROR: Error downloading <GET http://'http:/robots.txt>: DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.
DNSLookupError: DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.
2016-11-02 14:59:12 [scrapy] DEBUG: Retrying <GET http://'http://quotes.toscrape.com'> (failed 1 times): DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.
2016-11-02 14:59:12 [scrapy] DEBUG: Retrying <GET http://'http://quotes.toscrape.com'> (failed 2 times): DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.
2016-11-02 14:59:12 [scrapy] DEBUG: Gave up retrying <GET http://'http://quotes.toscrape.com'> (failed 3 times): DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.
Traceback (most recent call last):
  File "C:\Python27\Scripts\scrapy-script.py", line 9, in <module>
    load_entry_point('scrapy==1.2.1', 'console_scripts', 'scrapy')()
  File "c:\python27\lib\site-packages\scrapy-1.2.1-py2.7.egg\scrapy\cmdline.py", line 142, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "c:\python27\lib\site-packages\scrapy-1.2.1-py2.7.egg\scrapy\cmdline.py", line 88, in _run_print_help
    func(*a, **kw)
  File "c:\python27\lib\site-packages\scrapy-1.2.1-py2.7.egg\scrapy\cmdline.py", line 149, in _run_command
    cmd.run(args, opts)
  File "c:\python27\lib\site-packages\scrapy-1.2.1-py2.7.egg\scrapy\commands\shell.py", line 71, in run
    shell.start(url=url)
  File "c:\python27\lib\site-packages\scrapy-1.2.1-py2.7.egg\scrapy\shell.py", line 47, in start
    self.fetch(url, spider)
  File "c:\python27\lib\site-packages\scrapy-1.2.1-py2.7.egg\scrapy\shell.py", line 112, in fetch
    reactor, self._schedule, request, spider)
  File "c:\python27\lib\site-packages\twisted\internet\threads.py", line 122, in blockingCallFromThread
    result.raiseException()
  File "<string>", line 2, in raiseException
twisted.internet.error.DNSLookupError: DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.

 

Solution:

On Windows, cmd.exe does not treat single quotes as quoting characters, so they are passed to Scrapy as part of the URL (hence the http://'http:... requests in the log above). Replace the single quotes with double quotes:

scrapy shell "http://quotes.toscrape.com"

 

C:\Users\wuzhi_000\Desktop\tutorial>scrapy shell "http://quotes.toscrape.com"
2016-11-02 15:07:29 [scrapy] INFO: Scrapy 1.2.1 started (bot: tutorial)
2016-11-02 15:07:29 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'ROBOTSTXT_OBEY': True, 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial', 'LOGSTATS_INTERVAL': 0}
2016-11-02 15:07:29 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2016-11-02 15:07:29 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-11-02 15:07:29 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-11-02 15:07:29 [scrapy] INFO: Enabled item pipelines:
[]
2016-11-02 15:07:29 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-11-02 15:07:29 [scrapy] INFO: Spider opened
2016-11-02 15:07:31 [scrapy] DEBUG: Crawled (404) <GET http://quotes.toscrape.com/robots.txt> (referer: None)
2016-11-02 15:07:31 [scrapy] DEBUG: Crawled (200) <GET http://quotes.toscrape.com> (referer: None)
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler    <scrapy.crawler.Crawler object at 0x00000000057B1C18>
[s]   item       {}
[s]   request    <GET http://quotes.toscrape.com>
[s]   response   <200 http://quotes.toscrape.com>
[s]   settings   <scrapy.settings.Settings object at 0x00000000057B1A58>
[s]   spider     <DefaultSpider 'default' at 0x5ae19b0>
[s] Useful shortcuts:
[s]   shelp()           Shell help (print this help)
[s]   fetch(req_or_url) Fetch request (or URL) and update local objects
[s]   view(response)    View response in a browser
>>>
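From the >>> prompt you can use the objects listed above directly. A short sketch of typical follow-up commands; the CSS selectors are assumptions based on the public quotes.toscrape.com markup (each quote sits in a div with class "quote" and its text in a span with class "text"), not taken from the log:

>>> response.css('title::text').extract_first()          # page title as a string
>>> response.css('div.quote span.text::text').extract()  # list of quote texts
>>> fetch('http://quotes.toscrape.com/page/2/')           # re-fetch; updates local objects

As noted in the shortcuts above, fetch() updates the shell's request and response objects in place, so you can keep exploring other pages without restarting the shell.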

 
