Question
The Googlebot seems to be crawling up inside my jQuery and creating links ending in /a that don't exist and then reporting them as 404 errors.
http://www.mySite.com/a
The site validates green at the W3C.
The "/a" is coming from inside jQuery itself. Edit: the following markup is part of a line of code in jQuery v1.5 and 1.5.2 (the only two versions I looked inside):
<a href='/a' style='color:red;float:left;opacity:.55;'>a</a>
For now, I'm redirecting it in .htaccess before it gets out of hand...
Redirect 301 /a http://www.mysite.com
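If you want to be explicit that only the exact path /a gets redirected (and not, say, /a/anything), mod_alias's RedirectMatch takes a regex instead of a path prefix. A minimal sketch, assuming mod_alias is enabled and substituting your own homepage URL:

```apache
# Redirect exactly /a (and nothing else) to the homepage.
RedirectMatch 301 ^/a$ http://www.mysite.com/
```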
Does anyone know why/how the Googlebot would go inside jQuery?
EDIT:
I've since blocked the jQuery file with the robots.txt file but I really wasn't expecting the Googlebot to go into external JavaScript files.
EDIT 2:
The following is a response from Google employee JohnMu on this issue in the thread I started at Google Groups. Looks like I'm going to do the 301 after all.
JohnMu
Google Employee
4:39 AM
Hi guys
Just a short note on this -- yes, we are picking up the "/a" link for many sites from jQuery JavaScript. However, that generally isn't a problem, if we see "/a" as being a 404, then that's fine for us. As with other 404-URLs, we'll list it as a crawl error in Webmaster Tools, but again, that's not going to be a problem for crawling, indexing, or ranking. If you want to make sure that it doesn't trigger a crawl error in Webmaster Tools, then I would recommend just 301 redirecting that URL to your homepage (disallowing the URL will also bring it up as a crawl error - it will be listed as a URL disallowed by robots.txt).
I would also recommend not explicitly disallowing crawling of the jQuery file. While we generally wouldn't index it on its own, we may need to access it to generate good Instant Previews for your site.
So to sum it up: If you're seeing "/a" in the crawl errors in Webmaster Tools, you can just leave it like that, it won't cause any problems. If you want to have it removed there, you can do a 301 redirect to your homepage.
Cheers
John
Answer 1:
It looks like jQuery uses that markup as a test template to determine browser support for features. I'm not sure why it would ever be seen by Googlebot, though. I wasn't aware that web crawlers typically ran any JavaScript; that would mean they're actually functioning as a web browser (which one, I wonder?). Seems unlikely.
(Edit: see how do web crawlers handle javascript, which indicates that Google may try to pull some URLs out of scripts. I'm surprised it isn't programmed to recognize something that's part of jQuery; do you use a nonstandard name for the include?)
Alternatively, is there any chance that the header for your jQuery include is incorrect? Maybe it's being served with an HTML MIME type, which most browsers would probably ignore since the type is also set by the script include, but which a bot might decide to parse.
In any event, rather than setting up a redirect, why not just use robots.txt? Add this line:
Disallow: /a
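One caveat with that line: robots.txt Disallow rules match by prefix, so Disallow: /a would also block /about, /archive, and anything else starting with /a. Googlebot honors the $ end-of-URL anchor, so a sketch that blocks only the exact path might look like:

```
# robots.txt sketch: $ anchors the match to the end of the URL
# (supported by Googlebot, not guaranteed for all crawlers).
User-agent: Googlebot
Disallow: /a$
```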
You could also try patching jQuery. Obfuscating the link a little would probably do the trick, e.g. change the offending line:
div.innerHTML = " <link/><table></table><"+"a hr"+"ef='/a'"
+" style='color:red;float:left;opacity:.55;'>a</a><input type='checkbox'/>";
If Google is smart enough to actually parse string concatenations (which would shock me), you could go one step further: assign something like "href" to a variable and then concatenate with that. I can't believe their JS scanner would go that far; that would basically be like trying to run it.
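As a sketch of that idea (assuming you're patching your local copy of jQuery), splitting both the attribute name and the path keeps the literal href='/a' out of the raw source while producing an identical string at runtime:

```javascript
// Hypothetical obfuscation sketch: assemble the attribute name and path
// from fragments so the literal href='/a' never appears in the source.
var attr = "hr" + "ef";   // -> "href"
var path = "/" + "a";     // -> "/a"
var markup = "<link/><table></table><" + "a " + attr + "='" + path + "'" +
    " style='color:red;float:left;opacity:.55;'>a</a>" +
    "<input type='checkbox'/>";
// At runtime the string is byte-for-byte what jQuery originally built,
// so the feature detection that uses it is unaffected.
```

Whether this is worth the maintenance cost of a forked jQuery is another question; the redirect or robots.txt options don't require touching the library at all.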
Source: https://stackoverflow.com/questions/5749348/jquery-causing-404-errors-in-webmaster-tools-on-a-directory