How do I save a web page, programmatically?

Submitted by 旧巷老猫 on 2019-12-19 07:11:16

Question


I would like to save a web page programmatically.

I don't mean merely saving the HTML. I would also like to automatically store all associated files (images, CSS files, maybe embedded SWF, etc.) and, ideally, rewrite the links for local browsing.

The intended usage is a personal bookmarks application, in which link content is cached in case the original copy is taken down.


Answer 1:


Take a look at wget, specifically the -p flag:

-p, --page-requisites
    This option causes Wget to download all the files that are
    necessary to properly display a given HTML page. This includes
    such things as inlined images, sounds, and referenced stylesheets.

The following command:

wget -p http://<site>/1.html

will download 1.html and all the files it requires.
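If you want to implement requisite collection yourself rather than shelling out to wget, the core step is scanning the HTML for resource references and resolving them against the page URL. A minimal sketch using only the Python standard library (the example page and URLs are hypothetical):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class RequisiteScanner(HTMLParser):
    """Collect absolute URLs of page requisites (images, scripts, stylesheets)."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.requisites = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and "src" in attrs:
            self.requisites.append(urljoin(self.base_url, attrs["src"]))
        elif tag == "link" and attrs.get("rel") == "stylesheet" and "href" in attrs:
            self.requisites.append(urljoin(self.base_url, attrs["href"]))

html = ('<html><head><link rel="stylesheet" href="style.css"></head>'
        '<body><img src="logo.png"><script src="/js/app.js"></script></body></html>')
scanner = RequisiteScanner("http://example.com/pages/1.html")
scanner.feed(html)
print(scanner.requisites)
# → ['http://example.com/pages/style.css',
#    'http://example.com/pages/logo.png',
#    'http://example.com/js/app.js']
```

A full solution would then fetch each collected URL, save it locally, and rewrite the corresponding `src`/`href` attributes to the local paths, which is roughly what wget's --convert-links flag does.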




Answer 2:


On Windows, you can drive Internet Explorer as a COM object and pull everything out of it.

Alternatively, you could build on the Mozilla source.

In Java, there is Lobo.

Or use commons-httpclient and write a lot of code yourself.




Answer 3:


You could try the MHTML format (which is what IE uses). http://en.wikipedia.org/wiki/MHTML

In other words, you'd download each object (image, CSS file, etc.) to your computer and then embed them, via Base64, into a single file.
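MHTML proper is a MIME multipart/related container, but a simpler cousin of the same idea is inlining each downloaded resource into the HTML as a Base64 `data:` URI. A minimal sketch (the 1x1 GIF below stands in for a downloaded image):

```python
import base64

def inline_resource(data: bytes, mime_type: str) -> str:
    """Encode a downloaded resource as a data: URI for single-file storage."""
    encoded = base64.b64encode(data).decode("ascii")
    return f"data:{mime_type};base64,{encoded}"

# A tiny 1x1 transparent GIF stands in for a fetched image.
pixel = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
         b"\x00\x02\x02D\x01\x00;")
uri = inline_resource(pixel, "image/gif")

# The original <img src="pixel.gif"> would be rewritten to reference the URI:
tag = f'<img src="{uri}">'
print(tag[:35])
```

The resulting page is fully self-contained: every `src` and `href` that pointed at an external object now carries the object's bytes inline, so the single file browses offline.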



Source: https://stackoverflow.com/questions/1732318/how-do-i-save-a-web-page-programatically
