I'm using the MediaWiki API to pull text from a Wikia page, but it comes back as a big mess. Is there a better way to pull the text from each section?

Submitted by 大兔子大兔子 on 2019-12-03 15:40:45

The easiest way, if you don't want to parse the wiki markup yourself, is to retrieve the parsed HTML version of the page and then process it using an HTML parser (like jsoup, as recommended by Hasham).

Besides just scraping the normal wiki user interface (which will give you the page HTML wrapped in the navigation skin), there are two ways of getting the HTML text of a MediaWiki page:

  1. use the API with action=parse, which will return the page HTML wrapped in a MediaWiki API XML (or JSON / YAML / etc.) response, like this:

  2. or use the main index.php script with action=render, which will return just the page HTML:
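Both request shapes above can be sketched as URL builders. This is an illustrative sketch only: `your.wikia.com` and the page title are placeholders, and you should substitute your own wiki's host and target page.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Builds the two kinds of request URL described above.
public class MediaWikiUrls {

    // 1. API request: returns the page HTML wrapped in an API response
    //    (format=json here; format=xml etc. work the same way).
    public static String parseApiUrl(String host, String page) {
        return "http://" + host + "/api.php?action=parse&format=json&page="
                + encode(page);
    }

    // 2. index.php request: returns just the page HTML, no wrapper.
    public static String renderUrl(String host, String page) {
        return "http://" + host + "/index.php?action=render&title="
                + encode(page);
    }

    private static String encode(String s) {
        try {
            return URLEncoder.encode(s, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError(e); // UTF-8 is always available
        }
    }

    public static void main(String[] args) {
        System.out.println(parseApiUrl("your.wikia.com", "Main Page"));
        System.out.println(renderUrl("your.wikia.com", "Main Page"));
    }
}
```

The second form is the simpler one to feed straight into an HTML parser, since there is no API envelope to unwrap first.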

P.S. Since you mention sections in your question, let me note that the action=parse API module can return information about the sections on the page using prop=sections (or even prop=sections|text). For an example, see this API query:
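As an illustration of the prop=sections idea, here is a sketch that builds such a query and pulls the section headings out of an abbreviated sample response. The sample JSON and the regex "parsing" are only for demonstration; in real code you would use a proper JSON library.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of working with prop=sections.
public class SectionList {

    public static String sectionsUrl(String host, String page) {
        return "http://" + host + "/api.php?action=parse&format=json"
                + "&prop=sections&page=" + page;
    }

    // Pulls the "line" field (the section heading) out of each section
    // object in the API response. Regex is fine for a demo, not for production.
    public static List<String> sectionTitles(String json) {
        List<String> titles = new ArrayList<String>();
        Matcher m = Pattern.compile("\"line\":\"(.*?)\"").matcher(json);
        while (m.find()) {
            titles.add(m.group(1));
        }
        return titles;
    }

    public static void main(String[] args) {
        // Abbreviated example of what the API returns for prop=sections.
        String sample = "{\"parse\":{\"sections\":["
                + "{\"toclevel\":1,\"line\":\"History\",\"index\":\"1\"},"
                + "{\"toclevel\":1,\"line\":\"Geography\",\"index\":\"2\"}]}}";
        System.out.println(sectionTitles(sample)); // [History, Geography]
    }
}
```

Once you have the section index numbers, you can fetch a single section's text by adding `&section=N` to the action=parse request, which directly addresses the per-section extraction asked about in the question.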

The content is formatted using wiki syntax. You can render it as HTML using a Java engine called Bliki.

http://code.google.com/p/gwtwiki/

http://code.google.com/p/gwtwiki/wiki/Mediawiki2HTML
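To give a feel for what such an engine does, here is a toy stdlib-only sketch that converts just two bits of wiki syntax (bold and level-2 headings) to HTML. This is not Bliki's API; Bliki handles the full MediaWiki grammar, templates included.

```java
// Toy illustration of wikitext-to-HTML conversion. Only bold ('''...''')
// and level-2 headings (== ... ==) are handled here; a real engine like
// Bliki covers the whole MediaWiki grammar.
public class ToyWikiRenderer {

    public static String render(String wikitext) {
        String html = wikitext;
        // ==Heading== on its own line becomes an <h2>.
        html = html.replaceAll("(?m)^==\\s*(.+?)\\s*==\\s*$", "<h2>$1</h2>");
        // '''bold''' becomes <b>bold</b>.
        html = html.replaceAll("'''(.+?)'''", "<b>$1</b>");
        return html;
    }

    public static void main(String[] args) {
        System.out.println(render("== History ==\nA '''very''' old town."));
        // <h2>History</h2>
        // A <b>very</b> old town.
    }
}
```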

Bliki is not designed for Android out of the box; you need to compile it yourself for that platform. It seems this can be done:

https://groups.google.com/forum/?fromgroups=#!topic/bliki/LNsmnEEZEV4

If you want to parse the HTML document, then jsoup is the right choice.
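jsoup is the robust tool for this. As a stdlib-only illustration of the underlying idea, the crude helper below grabs the HTML between one &lt;h2&gt; heading and the next and strips the tags; note that real MediaWiki output wraps headings in extra spans, which is exactly why jsoup's selectors (e.g. `Jsoup.parse(html).select(...)` and `.text()`) are the better choice in practice.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Crude, demo-only section extractor. Regexes are not a substitute for a
// real HTML parser like jsoup; this just illustrates the per-section idea.
public class CrudeSectionText {

    public static String sectionText(String html, String heading) {
        // Find the heading, then capture up to the next <h2> (or the end).
        Pattern p = Pattern.compile(
                "<h2[^>]*>\\s*" + Pattern.quote(heading)
                + "\\s*</h2>(.*?)(?=<h2|$)",
                Pattern.DOTALL);
        Matcher m = p.matcher(html);
        if (!m.find()) {
            return "";
        }
        // Strip the remaining tags and tidy the whitespace.
        return m.group(1).replaceAll("<[^>]+>", " ")
                .replaceAll("\\s+", " ").trim();
    }

    public static void main(String[] args) {
        String html = "<h2>History</h2><p>Founded long ago.</p>"
                + "<h2>Geography</h2><p>Hilly.</p>";
        System.out.println(sectionText(html, "History")); // Founded long ago.
    }
}
```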
