Question
I have a Wikipedia article and I want to fetch the first z lines (or the first x chars, or the first y words, it doesn't matter) from the article.
The problem: I can get either the source wikitext (via the API) or the parsed HTML (via a direct HTTP request, possibly of the print version), but how can I find the first lines that are actually displayed? Normally the source (both HTML and wikitext) starts with the infoboxes and images, and the first real text to display is somewhere further down in the code.
For example: Albert Einstein on Wikipedia (print version). Look at the code: the first real text line, "Albert Einstein (pronounced /ˈælbərt ˈaɪnstaɪn/; German: [ˈalbɐt ˈaɪ̯nʃtaɪ̯n]; 14 March 1879 – 18 April 1955) was a theoretical physicist.", is not at the start. The same applies to the wiki source: it starts with the same infobox and so on.
So how would you accomplish this task? The programming language is Java, but that shouldn't matter.
A solution which came to my mind was to use an XPath query, but this query would be rather complicated to handle all the border cases. Update: it wasn't that complicated, see my solution below!
Thanks!
Answer 1:
You don't need to.
The API's exintro parameter returns only the first (zeroth) section of the article.
Example: api.php?action=query&prop=extracts&exintro&explaintext&titles=Albert%20Einstein
There are other parameters, too:
exchars
    Length of extracts in characters.
exsentences
    Number of sentences to return.
exintro
    Return only the zeroth section.
exsectionformat
    What section heading format to use for plain-text extracts: wiki (e.g., == Wikitext ==), plain (no special decoration), or raw (this extension's internal representation).
exlimit
    Maximum number of extracts to return. Because excerpt generation can be slow, the limit is capped at 20 for intro-only extracts and 1 for whole-page extracts.
explaintext
    Return plain-text extracts.
excontinue
    When more results are available, use this parameter to continue.
Source: https://www.mediawiki.org/wiki/Extension:MobileFrontend#prop.3Dextracts
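To make that concrete, here is a minimal Java sketch of calling the endpoint. It assumes the English Wikipedia's api.php and adds format=json so the response is machine-readable; pulling the extract field out of the JSON is left to whatever JSON library you prefer.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;

public class WikiIntro {
    // Fetch the plain-text intro of an article via the extracts API.
    // Endpoint and parameters follow the answer above; format=json is
    // added here (an assumption) so the response can be parsed programmatically.
    static String fetchIntroJson(String title) throws Exception {
        String url = "https://en.wikipedia.org/w/api.php"
                + "?action=query&prop=extracts&exintro&explaintext&format=json"
                + "&titles=" + URLEncoder.encode(title, "UTF-8");
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(url).openStream(), "UTF-8"));
        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            sb.append(line);
        }
        in.close();
        // The extract text sits under query.pages.<pageid>.extract in the JSON
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchIntroJson("Albert Einstein"));
    }
}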
Answer 2:
I had the same need and wrote some Python code to do that.
The script downloads the Wikipedia article with the given name, parses it using BeautifulSoup, and returns the first few paragraphs.
Code is at http://github.com/anandology/sandbox/blob/master/wikisnip/wikisnip.py.
Answer 3:
Wikipedia offers an Abstracts download. While this is quite a large file (currently 2.5 GB), it offers exactly the info you want, for all articles.
Answer 4:
You need a parser that can read Wikipedia markup. Try WikiText or the parsers that come with XWiki.
That will allow you to ignore anything you don't want (headings, tables).
Answer 5:
I opened the Albert Einstein article in Firefox and clicked View Source. It's pretty easy to parse using an HTML parser. You should focus on the <p> tags and strip the other HTML from within them.
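As a sketch of that approach, here is how it could look with the jsoup HTML parser (jsoup is chosen for illustration; the answer doesn't prescribe any particular parser):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class FirstParagraph {
    public static void main(String[] args) throws Exception {
        // Fetch and parse the article; jsoup tolerates real-world HTML
        Document doc = Jsoup.connect("https://en.wikipedia.org/wiki/Albert_Einstein")
                .userAgent("Mozilla/5.0")
                .get();
        // Select the first <p> inside the article body
        Element p = doc.select("div#bodyContent p").first();
        if (p != null) {
            // text() strips all markup inside the paragraph
            System.out.println(p.text());
        }
    }
}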
Answer 6:
For example, if you have the result in a string, you would search for the text <div id="bodyContent"> and, after that index, find the first <p>; that would be the index of the first paragraph you mentioned.
Try this URL: Link to the content (it only works in the browser).
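A rough Java sketch of that index-based search (deliberately crude; it assumes the page HTML has already been fetched into a string):

// Crude string search for the first paragraph after the bodyContent div.
// Brittle by design; shown only to illustrate the index-based idea above.
static String firstParagraphByIndex(String html) {
    int body = html.indexOf("<div id=\"bodyContent\">");
    if (body < 0) return null;
    int pStart = html.indexOf("<p>", body);
    if (pStart < 0) return null;
    int pEnd = html.indexOf("</p>", pStart);
    if (pEnd < 0) return null;
    // Inner HTML of the first paragraph; tags inside it still need stripping
    return html.substring(pStart + 3, pEnd);
}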
Answer 7:
Well, when using the wiki source itself you could just strip out all templates at the start. That might work well enough for most articles that have infoboxes or some messages at the top.
However, some articles might put the opening blurb into a template itself, so that would be a little more difficult there.
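A naive sketch of that template stripping, counting {{ and }} depth (illustrative only; real wikitext has nesting and edge cases this will miss):

// Strip leading {{...}} templates from wikitext by counting brace depth.
static String stripLeadingTemplates(String wikitext) {
    String s = wikitext.trim();
    while (s.startsWith("{{")) {
        int depth = 0;
        int i = 0;
        while (i < s.length() - 1) {
            if (s.charAt(i) == '{' && s.charAt(i + 1) == '{') {
                depth++;
                i += 2;
            } else if (s.charAt(i) == '}' && s.charAt(i + 1) == '}') {
                depth--;
                i += 2;
                if (depth == 0) break;
            } else {
                i++;
            }
        }
        if (depth != 0) break; // unbalanced braces, give up
        s = s.substring(i).trim();
    }
    return s;
}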
Another way, perhaps more reliable, would be to take the contents of the first <p> tag that appears directly in the article text (so not nested in a table or the like). This should strip out infoboxes and other stuff at the start, as those are probably (I'm not exactly sure) <table>s or <div>s.
Generally, Wikipedia is written for human consumption with only very minimal support for anything semantic. That makes automatic extraction of specific information from the articles pretty painful.
Answer 8:
As you expect, you will probably have to end up parsing the source, the rendered HTML, or both. However, the Wikipedia:Lead_section guideline may give you some indication of what to expect in well-written articles.
Answer 9:
I worked out the following solution: use an XPath query on the XHTML source code (I took the print version because it is shorter, but it also works on the normal version).
//html/body//div[@id='bodyContent']/p[1]
This works on the German and the English Wikipedia, and I haven't found an article where it doesn't output the first paragraph. The solution is also quite fast. I also thought of taking only the first x chars of the XHTML, but that would render the XHTML invalid.
If someone is looking for the Java code, here it is:
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathExpression;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;

// Note: 'logger' is assumed to be provided by the surrounding class.

private static DocumentBuilderFactory dbf;
static {
    dbf = DocumentBuilderFactory.newInstance();
    // Skip fetching the external DTD, which would slow parsing down considerably
    dbf.setAttribute("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
}
private static XPathFactory xpathf = XPathFactory.newInstance();
private static String xexpr = "//html/body//div[@id='bodyContent']/p[1]";

private static String getPlainSummary(String url) {
    try {
        // Open the Wikipedia page
        URL u = new URL(url);
        URLConnection uc = u.openConnection();
        uc.setRequestProperty("User-Agent", "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1) Gecko/20090616 Firefox/3.5");
        InputStream uio = uc.getInputStream();
        InputSource src = new InputSource(uio);
        // Construct the builder and parse the XHTML
        DocumentBuilder builder = dbf.newDocumentBuilder();
        Document docXML = builder.parse(src);
        // Apply the XPath query
        XPath xpath = xpathf.newXPath();
        XPathExpression xpathe = xpath.compile(xexpr);
        String s = xpathe.evaluate(docXML);
        // Return the text of the first paragraph, or null if it is empty
        if (s.length() == 0) {
            return null;
        } else {
            return s;
        }
    } catch (IOException ioe) {
        logger.error("Can't get XML", ioe);
        return null;
    } catch (ParserConfigurationException pce) {
        logger.error("Can't get DocumentBuilder", pce);
        return null;
    } catch (SAXException se) {
        logger.error("Can't parse XML", se);
        return null;
    } catch (XPathExpressionException xpee) {
        logger.error("Can't parse XPath", xpee);
        return null;
    }
}
Use it by calling getPlainSummary("http://de.wikipedia.org/wiki/Uma_Thurman");
Source: https://stackoverflow.com/questions/1565347/get-first-lines-of-wikipedia-article