How to parse Wikipedia XML with PHP?

Posted by 走远了吗 on 2019-12-17 19:29:46

Question


How do I parse Wikipedia XML with PHP? I tried it with SimplePie, but got nothing. Here is a link to the data I want to fetch:

http://en.wikipedia.org/w/api.php?action=query&generator=allpages&gaplimit=2&gapfilterredir=nonredirects&gapfrom=Re&prop=revisions&rvprop=content&format=xml

Edit: here is my code:

<?php
    define("EMAIL_ADDRESS", "youlichika@hotmail.com"); 
    $ch = curl_init(); 
    $cv = curl_version(); 
    $user_agent = "curl {$cv['version']} ({$cv['host']}) libcurl/{$cv['version']} {$cv['ssl_version']} zlib/{$cv['libz_version']} <" . EMAIL_ADDRESS . ">"; 
    curl_setopt($ch, CURLOPT_USERAGENT, $user_agent); 
    curl_setopt($ch, CURLOPT_COOKIEFILE, "cookies.txt"); 
    curl_setopt($ch, CURLOPT_COOKIEJAR, "cookies.txt"); 
    curl_setopt($ch, CURLOPT_ENCODING, "deflate, gzip, identity"); 
    curl_setopt($ch, CURLOPT_HEADER, FALSE); 
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE); 
    curl_setopt($ch, CURLOPT_HTTPGET, TRUE); 
    curl_setopt($ch, CURLOPT_URL, "http://en.wikipedia.org/w/api.php?action=query&generator=allpages&gaplimit=2&gapfilterredir=nonredirects&gapfrom=Re&prop=revisions&rvprop=content&format=xml"); 
    $xml = curl_exec($ch); 
    $xml_reader = new XMLReader(); 
    $xml_reader->xml($xml, "UTF-8"); 
    echo $xml->api->query->pages->page->rev;
?>

Answer 1:


I generally use a combination of cURL and XMLReader to parse XML generated by the MediaWiki API.

Note that you must include your e-mail address in the User-Agent header, or the API will respond with HTTP 403 Forbidden.

Here is how I initialize the cURL handle:

define("EMAIL_ADDRESS", "my@email.com");
$ch = curl_init();
$cv = curl_version();
$user_agent = "curl {$cv['version']} ({$cv['host']}) libcurl/{$cv['version']} {$cv['ssl_version']} zlib/{$cv['libz_version']} <" . EMAIL_ADDRESS . ">";
curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
curl_setopt($ch, CURLOPT_COOKIEFILE, "cookies.txt");
curl_setopt($ch, CURLOPT_COOKIEJAR, "cookies.txt");
curl_setopt($ch, CURLOPT_ENCODING, "deflate, gzip, identity");
curl_setopt($ch, CURLOPT_HEADER, FALSE);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);

You can then use this code, which grabs the XML and constructs a new XMLReader object in $xml_reader:

curl_setopt($ch, CURLOPT_HTTPGET, TRUE);
curl_setopt($ch, CURLOPT_URL, "http://en.wikipedia.org/w/api.php?action=query&generator=allpages&gaplimit=2&gapfilterredir=nonredirects&gapfrom=Re&prop=revisions&rvprop=content&format=xml");
$xml = curl_exec($ch);
$xml_reader = new XMLReader();
$xml_reader->xml($xml, "UTF-8");

EDIT: Here is a working example:

<?php
define("EMAIL_ADDRESS", "youlichika@hotmail.com");
$ch = curl_init();
$cv = curl_version();
$user_agent = "curl {$cv['version']} ({$cv['host']}) libcurl/{$cv['version']} {$cv['ssl_version']} zlib/{$cv['libz_version']} <" . EMAIL_ADDRESS . ">";
curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
curl_setopt($ch, CURLOPT_COOKIEFILE, "cookies.txt");
curl_setopt($ch, CURLOPT_COOKIEJAR, "cookies.txt");
curl_setopt($ch, CURLOPT_ENCODING, "deflate, gzip, identity");
curl_setopt($ch, CURLOPT_HEADER, FALSE);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($ch, CURLOPT_HTTPGET, TRUE);
curl_setopt($ch, CURLOPT_URL, "http://en.wikipedia.org/w/api.php?action=query&generator=allpages&gaplimit=2&gapfilterredir=nonredirects&gapfrom=Re&prop=revisions&rvprop=content&format=xml"); 
$xml = curl_exec($ch);
$xml_reader = new XMLReader();
$xml_reader->xml($xml, "UTF-8");

function extract_first_rev(XMLReader $xml_reader)
{
    while ($xml_reader->read()) {
        if ($xml_reader->nodeType == XMLReader::ELEMENT) {
            if ($xml_reader->name == "rev") {
                $content = htmlspecialchars_decode($xml_reader->readInnerXML(), ENT_QUOTES);
                return $content;
            }
        } else if ($xml_reader->nodeType == XMLReader::END_ELEMENT) {
            if ($xml_reader->name == "page") {
                throw new Exception("Unexpectedly found `</page>`");
            }
        }
    }

    throw new Exception("Reached the end of the XML document without finding revision content");
}

$latest_rev = array();
while ($xml_reader->read()) {
    if ($xml_reader->nodeType == XMLReader::ELEMENT) {
        if ($xml_reader->name == "page") {
            $latest_rev[$xml_reader->getAttribute("title")] = extract_first_rev($xml_reader);
        }
    }
}

function parse($rev)
{
    global $ch;

    curl_setopt($ch, CURLOPT_HTTPGET, TRUE);
    curl_setopt($ch, CURLOPT_URL, "http://en.wikipedia.org/w/api.php?action=parse&text=" . rawurlencode($rev) . "&prop=text&format=xml");
    sleep(3);
    $xml = curl_exec($ch);
    $xml_reader = new XMLReader();
    $xml_reader->xml($xml, "UTF-8");

    while ($xml_reader->read()) {
        if ($xml_reader->nodeType == XMLReader::ELEMENT) {
            if ($xml_reader->name == "text") {
                $html = htmlspecialchars_decode($xml_reader->readInnerXML(), ENT_QUOTES);
                return $html;
            }
        }
    }

    throw new Exception("Failed to parse");
}

foreach ($latest_rev as $title => $rev) { // do not reuse $latest_rev as the loop variable
    echo parse($rev) . "\n";
}



Answer 2:


You could use SimpleXML:

$xml = simplexml_load_file($url);

See example here: http://php.net/manual/en/simplexml.examples-basic.php
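As a sketch of how that applies to the query from the question: the API response has the shape `<api><query><pages><page title="..."><revisions><rev>...`, so SimpleXML lets you walk straight down to the revision text. The User-Agent value here is a placeholder — substitute your own contact address, as the accepted answer requires:

```php
<?php
// Sketch: fetch the allpages query from the question and walk it with SimpleXML.
// Assumes allow_url_fopen is enabled; the User-Agent string is a placeholder.
$url = "https://en.wikipedia.org/w/api.php?action=query&generator=allpages"
     . "&gaplimit=2&gapfilterredir=nonredirects&gapfrom=Re"
     . "&prop=revisions&rvprop=content&format=xml";
$context = stream_context_create([
    "http" => ["header" => "User-Agent: my-parser/1.0 (my@email.com)"],
]);
$xml = simplexml_load_string(file_get_contents($url, false, $context));
foreach ($xml->query->pages->page as $page) {
    echo $page["title"], "\n";        // the page's title attribute
    echo $page->revisions->rev, "\n"; // wikitext of the latest revision
}
```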

Or DOM:

$xml = new DOMDocument;
$xml->load($url);
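With DOM you would typically pair the loaded document with DOMXPath to pull out the `<page>` and `<rev>` nodes. A sketch against the same query URL as the question (network access and allow_url_fopen assumed):

```php
<?php
// Sketch: load the API response into DOMDocument, then extract each page's
// title and revision text with XPath queries.
$url = "https://en.wikipedia.org/w/api.php?action=query&generator=allpages"
     . "&gaplimit=2&gapfilterredir=nonredirects&gapfrom=Re"
     . "&prop=revisions&rvprop=content&format=xml";
$doc = new DOMDocument();
$doc->load($url); // DOMDocument::load() can read straight from a URL
$xpath = new DOMXPath($doc);
foreach ($xpath->query("//page") as $page) {
    $title = $page->getAttribute("title");
    $rev   = $xpath->query(".//rev", $page)->item(0); // first <rev> under this page
    echo $title, ": ", substr($rev->textContent, 0, 80), "\n";
}
```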

Or XMLReader, for huge XML documents that you don't want to read entirely into memory.




Answer 3:


You should look at the PHP XMLReader class.
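A minimal streaming sketch with XMLReader, which never holds the whole document in memory — `XMLReader::open()` can read directly from the query URL (again assuming network access and a suitable stream context or default User-Agent):

```php
<?php
// Minimal XMLReader sketch: stream the API response and print a snippet of
// each <rev> element's wikitext as it is encountered.
$url = "https://en.wikipedia.org/w/api.php?action=query&generator=allpages"
     . "&gaplimit=2&gapfilterredir=nonredirects&gapfrom=Re"
     . "&prop=revisions&rvprop=content&format=xml";
$reader = new XMLReader();
$reader->open($url);
while ($reader->read()) {
    if ($reader->nodeType === XMLReader::ELEMENT && $reader->name === "rev") {
        // readInnerXML() returns the revision's wikitext
        echo substr($reader->readInnerXML(), 0, 80), "\n";
    }
}
$reader->close();
```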



Source: https://stackoverflow.com/questions/4839938/how-to-parse-wikipedia-xml-with-php
