Modify nginx.conf to block web crawlers by User-Agent and return 403
Add an agent_deny.conf configuration file with the following rules:
# Block scraping tools such as Scrapy, curl, and HttpClient (~* matches case-insensitively)
if ($http_user_agent ~* (Scrapy|Curl|HttpClient)) {
    return 403;
}
# Block the listed User-Agents, plus empty User-Agents (matched by ^$); ~ is case-sensitive
if ($http_user_agent ~ "FeedDemon|JikeSpider|Indy Library|Alexa Toolbar|AskTbFXTV|AhrefsBot|CrawlDaddy|CoolpadWebkit|Java|Feedly|UniversalFeedParser|ApacheBench|Microsoft URL Control|Swiftbot|ZmEu|oBot|jaunty|Python-urllib|lightDeckReports Bot|YYSpider|DigExt|YisouSpider|HttpClient|MJ12bot|heritrix|EasouSpider|LinkpadBot|Ezooms|^$") {
    return 403;
}
# Block request methods other than GET, HEAD, and POST
if ($request_method !~ ^(GET|HEAD|POST)$) {
    return 403;
}
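As an aside not in the original post: nginx's documentation discourages scattering if blocks, so the same User-Agent test can instead be centralized in a map at http{} level. A minimal sketch, where $deny_ua is just a variable name chosen here:

# In the http{} block: evaluate the User-Agent once into a flag
map $http_user_agent $deny_ua {
    default                       0;
    "~*(Scrapy|Curl|HttpClient)"  1;   # ~* = case-insensitive regex key
    ""                            1;   # empty User-Agent
}
# Then, in a server{} or location{} block, act on the flag
if ($deny_ua) {
    return 403;
}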
In the relevant site configuration file, insert the line "include agent_deny.conf;", for example inside the PHP location block:
location ~ [^/]\.php(/|$) {
    try_files $uri =404;
    fastcgi_pass unix:/tmp/php-cgi.sock;
    fastcgi_index index.php;
    include fastcgi.conf;
    include agent_deny.conf;
}
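Before reloading, it is worth confirming that the edited configuration still parses; nginx's standard -t flag does this (the binary path depends on your install):

nginx -t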
Then reload nginx:
/etc/init.d/nginx reload
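To confirm the rules work, send requests with a blocked, an empty, and a normal User-Agent and compare the status codes (example.com stands in for your own host; -I sends a HEAD request, which the method filter allows):

# Blocked UA: expect HTTP/1.1 403 Forbidden
curl -I -A 'Scrapy/2.0' http://example.com/
# Empty UA: also expect 403, via the ^$ pattern
curl -I -A '' http://example.com/
# Ordinary browser UA: expect a normal response
curl -I -A 'Mozilla/5.0' http://example.com/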
In addition, to block all spider crawling directly in Apache, you can use the following directive:
BrowserMatch "Spider" bad_bot
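Note that BrowserMatch (from mod_setenvif) only tags matching requests with the bad_bot environment variable; a deny rule must reference that variable for anything to actually be blocked. A minimal sketch in Apache 2.2 syntax, where /var/www/html is a placeholder for your document root (on Apache 2.4, use "Require not env bad_bot" inside a <RequireAll> block instead):

<Directory "/var/www/html">
    # Deny any request that BrowserMatch tagged as bad_bot above
    Order Allow,Deny
    Allow from all
    Deny from env=bad_bot
</Directory>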
Source: https://www.cnblogs.com/enumx/p/12299956.html