How to get all webpages on a domain

Submitted by 梦想的初衷 on 2019-12-11 06:31:43

Question


I am making a simple web spider and I was wondering if there is a way that can be triggered from my PHP code to get all the webpages on a domain...

e.g. Let's say I wanted to get all the webpages on Stackoverflow.com. That means it would get:

  • https://stackoverflow.com/questions/ask
  • pulling webpages from an adult site -- how to get past the site agreement?
  • https://stackoverflow.com/questions/1234214/ (Best Rails HTML Parser)

And all the links. How can I get that? Or is there an API or DIRECTORY that would enable me to get that?

Also, is there a way I can get all the subdomains?

Btw, how do crawlers crawl websites that don't have sitemaps or syndication feeds?

Cheers.


Answer 1:


If a site wants you to be able to do this, they will probably provide a Sitemap. Using a combination of a sitemap and following the links on pages, you should be able to traverse all the pages on a site - but this is really up to the owner of the site, and how accessible they make it.
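
If a sitemap is available, reading it from PHP is straightforward. Below is a minimal sketch, assuming the site publishes a standard sitemap at /sitemap.xml (many sites don't, or point to it from robots.txt instead), so adjust the location as needed:

<?php
// Fetch a sitemap and print the URLs it declares.
// The location below is an assumption; adjust it for the site in question.
$sitemapUrl = 'https://example.com/sitemap.xml';

$xml = @file_get_contents($sitemapUrl);
if ($xml === false) {
    die("Could not fetch $sitemapUrl\n");
}

$doc = new DOMDocument();
$doc->loadXML($xml);

// <loc> elements live in the standard sitemap namespace.
$ns   = 'http://www.sitemaps.org/schemas/sitemap/0.9';
$locs = $doc->getElementsByTagNameNS($ns, 'loc');

foreach ($locs as $loc) {
    echo trim($loc->textContent), "\n";
}

This also works on sitemap index files, since they list their child sitemaps in <loc> elements as well.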

If the site does not want you to do this, there is nothing you can do to work around it. HTTP does not provide any standard mechanism for listing the contents of a directory.




Answer 2:


You would need to hack the server, sorry.

What you can do, if you own the domain www.my-domain.com, is put a PHP file there that you use as an on-demand request file. In that PHP file you will need to write some code that can look at the folders FTP-wise. PHP can connect to an FTP server, so that's a way to go :)

http://dk1.php.net/manual/en/book.ftp.php

With PHP you can read the directories and folders and return them as an array. Best I can do.
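
For what it's worth, here is a minimal sketch of that FTP approach, using PHP's built-in FTP functions; the host, credentials and path are placeholders, not real values:

<?php
// Connect to your own server over FTP and list a directory.
$conn = ftp_connect('ftp.my-domain.com');                   // placeholder host
if (!$conn || !ftp_login($conn, 'username', 'password')) {  // placeholder credentials
    die("FTP connection or login failed\n");
}

ftp_pasv($conn, true);   // passive mode is usually friendlier to firewalls

// List files and folders under the web root (the path depends on your hosting).
$entries = ftp_nlist($conn, '/public_html');
print_r($entries);

ftp_close($conn);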




Answer 3:


As you have said, you must follow all the links.

To do this, you must start by retrieving stackoverflow.com, easy: file_get_contents("http://stackoverflow.com").

Then parse its contents, looking for links: <a href="question/ask">, not so easy.
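
For the parsing step, DOMDocument is more robust than regular expressions. A rough sketch of extracting every href from a fetched page:

<?php
// Fetch a page and collect the href of every <a> element.
$html = file_get_contents('https://stackoverflow.com');

$doc = new DOMDocument();
@$doc->loadHTML($html);   // suppress warnings about real-world, imperfect HTML

$links = [];
foreach ($doc->getElementsByTagName('a') as $a) {
    $href = $a->getAttribute('href');
    if ($href !== '') {
        $links[] = $href;
    }
}

print_r(array_unique($links));

Keep in mind that many of the extracted hrefs will be relative (like question/ask above) and need to be resolved against the page's URL before you can fetch them.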

You store those new URLs in a database and then parse those afterwards, which will give you a whole new set of URLs; parse those too. Soon enough you'll have the vast majority of the site's content, including stuff like sub1.stackoverflow.com. This is called crawling, and it is quite simple to implement, although it's not so simple to retrieve useful information once you have all that data.

If you are only interested in one particular domain, be sure to dismiss links to external sites.
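
Putting those pieces together, a toy crawler along these lines might look like the sketch below. It keeps a queue of URLs, fetches each page, collects its links, and only follows links on the starting host; there is no politeness delay, robots.txt handling or persistence, all of which a real crawler needs:

<?php
// Extract all hrefs from an HTML string.
function extractLinks(string $html): array
{
    $doc = new DOMDocument();
    @$doc->loadHTML($html);
    $links = [];
    foreach ($doc->getElementsByTagName('a') as $a) {
        $links[] = $a->getAttribute('href');
    }
    return $links;
}

$start   = 'https://example.com/';      // placeholder starting point
$host    = parse_url($start, PHP_URL_HOST);
$queue   = [$start];
$visited = [];

while ($queue && count($visited) < 50) { // hard cap, since this is only a sketch
    $url = array_shift($queue);
    if (isset($visited[$url])) {
        continue;
    }
    $visited[$url] = true;

    $html = @file_get_contents($url);
    if ($html === false) {
        continue;
    }

    foreach (extractLinks($html) as $href) {
        // Crude resolution of protocol-relative and root-relative links.
        if (strpos($href, '//') === 0) {
            $href = 'https:' . $href;
        } elseif (strpos($href, '/') === 0) {
            $href = 'https://' . $host . $href;
        }
        // Dismiss links to external sites (and anything already visited).
        if (parse_url($href, PHP_URL_HOST) === $host && !isset($visited[$href])) {
            $queue[] = $href;
        }
    }
}

print_r(array_keys($visited));

Storing the queue and the visited set in a database, as described above, is what lets the crawl survive restarts and scale past a few thousand pages.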




Answer 4:


No, not the way you are asking.

However, provided you have a clear goal in mind, you may be able to:

  • use a "primary" request to get the objects of interest. Some sites provide JSON, XML, ... APIs to list such objects (e.g. SO can list questions this way; see the sketch after this list). Then use "per-object" requests to fetch information specific to one object

  • fetch information from other open (or paid) sources, e.g. search engines, directories, "forensic" tools such as SpyOnWeb

  • reverse engineer the structure of the site, e.g. you know that /item/<id> gets you to the page of item whose ID is <id>

  • ask the webmaster
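
As an illustration of the first point, here is a sketch of a "primary" listing request against the Stack Exchange API. The endpoint and parameters reflect my understanding of API version 2.3 and may change, so check the official documentation; note that this API gzip-compresses its responses, hence the CURLOPT_ENCODING option:

<?php
// List recently active Stack Overflow questions through the Stack Exchange API.
$url = 'https://api.stackexchange.com/2.3/questions?order=desc&sort=activity&site=stackoverflow';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return the body instead of printing it
curl_setopt($ch, CURLOPT_ENCODING, '');          // let cURL negotiate and decode gzip/deflate
$json = curl_exec($ch);
curl_close($ch);

$data = json_decode($json, true);
foreach ($data['items'] ?? [] as $question) {
    echo $question['link'], "\n";   // each item carries a link back to the question page
}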

Please note that some of these solutions may be in violation of the site's terms of use. Anyway, these are just pointers, off the top of my head.




Answer 5:


You can use WinHTTrack. But it is polite not to hammer other people's web sites.

I just use it to find broken links and make a snapshot.

If you do start hammering other people's sites they will take measures. Some of them will not be nice (i.e. hammer yours).

Just be polite.



Source: https://stackoverflow.com/questions/13922335/how-to-get-all-webpages-on-a-domain
