Google's crawler won't understand our own maps. How to work around it?

我寻月下人不归 2021-02-13 23:18

I found strange words (have, here, imagery, sorry) that were not supposed to be on my site being picked up as keywords by Google's crawler.

6 Answers
  •  野的像风
    2021-02-13 23:35

    This answer won't help you remove the words from the pages that have already been crawled, but it might prevent them from being added again after the next crawl.

    Your problem might be related to the crawler not being able to load a valid map. It's not exactly clear why it can't; the map provider might be blocking Googlebot.

    Anyway, if it's not too hard, I'd have a look here:

    https://support.google.com/webmasters/answer/1061943?hl=en

    Create a list of the user agents documented on that page. I'll use 'Googlebot' as an example, but you should check against every user agent you want to block:

    // A real Googlebot user-agent string only *contains* the token
    // 'Googlebot', so test for the substring rather than an exact match.
    if (!navigator.userAgent.includes('Googlebot')) {
       // load the map and other stuff
    } else {
       // show a picture where the map should be, or do nothing
    }
    

    Googlebot executes JavaScript, so this guard should prevent errors in case Googlebot can't load the map.
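
    The snippet above only checks for 'Googlebot'. Here's a minimal sketch of checking against several crawler tokens at once; the token list is illustrative, not the full list from the support page:

    // Illustrative crawler tokens; extend this with the other bots
    // listed on the support page linked above.
    const BOT_TOKENS = ['Googlebot', 'Googlebot-Image', 'AdsBot-Google'];
    const isBot = BOT_TOKENS.some(token => navigator.userAgent.includes(token));

    if (!isBot) {
       // load the map and other stuff
    } else {
       // show a static picture where the map should be, or do nothing
    }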

    One thing you could do is change your browser's user agent to 'Googlebot' and load your page. If the map provider is blocking any browser with this user agent, you should see exactly what Googlebot sees. The other problem is that Googlebot may also have timeouts to prevent loading too much data, and it won't load images.
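
    If it's easier to test outside the browser, here's a rough sketch (Node 18+, placeholder URL) that requests the page with a Googlebot user-agent string and prints what the server returns. Note that this only shows the raw response, not the JavaScript-rendered page:

    // Run as an ES module, e.g. `node check.mjs`; Node 18+ provides fetch.
    // The URL below is a placeholder for the page that embeds the map.
    const res = await fetch('https://example.com/page-with-map', {
      headers: { 'User-Agent': 'Googlebot/2.1 (+http://www.google.com/bot.html)' },
    });
    console.log(res.status);
    console.log(await res.text());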

    Adding guards like this might keep Googlebot from actually loading the map, which should help if the problem really is the map.
