Question
I am currently trying to extract the webgraph structure during my crawling run with Apache Nutch 1.13 and Solr 4.10.4. According to the documentation, the index-links plugin adds outlinks and inlinks to the collection.
I have changed my collection in Solr accordingly (added the respective fields to schema.xml and restarted Solr) and adapted the solrindex-mapping.xml file, but to no avail. The resulting error can be seen below.
bin/nutch index -D solr.server.url=http://localhost:8983/solr/collection1 crawl/crawldb/ -linkdb crawl/linkdb/ crawl/segments/* -filter -normalize -deleteGone
Segment dir is complete: crawl/segments/20170503114357.
Indexer: starting at 2017-05-03 11:47:02
Indexer: deleting gone documents: true
Indexer: URL filtering: true
Indexer: URL normalizing: true
Active IndexWriters :
SOLRIndexWriter
solr.server.url : URL of the SOLR instance
solr.zookeeper.hosts : URL of the Zookeeper quorum
solr.commit.size : buffer size when sending to SOLR (default 1000)
solr.mapping.file : name of the mapping file for fields (default solrindex-mapping.xml)
solr.auth : use authentication (default false)
solr.auth.username : username for authentication
solr.auth.password : password for authentication
Indexing 1/1 documents
Deleting 0 documents
Indexing 1/1 documents
Deleting 0 documents
Indexer: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:865)
at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:147)
at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:230)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:239)
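The generic "Job failed!" message hides the root cause; when Nutch runs in local (non-distributed) mode, the underlying Solr exception is written to logs/hadoop.log, which is worth checking, e.g.:
tail -n 100 logs/hadoop.log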
Interestingly, my own research led me to suspect that this is in fact non-trivial, since the parse output (without the plugin) looks like this:
bin/nutch indexchecker http://www.my-domain.com/
fetching: http://www.my-domain.com/
robots.txt whitelist not configured.
parsing: http://www.my-domain.com/
contentType: application/xhtml+xml
tstamp : Wed May 03 11:40:57 CEST 2017
digest : e549a51553a0fb3385926c76c52e0d79
host : http://www.my-domain.com/
id : http://www.my-domain.com/
title : Startseite
url : http://www.my-domain.com/
content : bla bla bla bla.
Yet, once I enable index-links, the output suddenly looks like this:
bin/nutch indexchecker http://www.my-domain.com/
fetching: http://www.my-domain.com/
robots.txt whitelist not configured.
parsing: http://www.my-domain.com/
contentType: application/xhtml+xml
tstamp : Wed May 03 11:40:57 CEST 2017
outlinks : http://www.my-domain.com/2-uncategorised/331-links-administratives
outlinks : http://www.my-domain.com/2-uncategorised/332-links-extern
outlinks : http://www.my-domain.com/impressum.html
id : http://www.my-domain.com/
title : Startseite
url : http://www.my-domain.com/
content : bla bla bla
Obviously, this cannot fit into a single field; I just want a single list with all the outlinks (I have read that the inlinks do not work, but I do not need them anyway).
Answer 1:
You have to specify the fields in solrindex-mapping.xml like this:
<field dest="inlinks" source="inlinks"/>
<field dest="outlinks" source="outlinks"/>
Afterwards, make sure to unload and reload the collection, including a complete restart of Solr.
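On Solr 4.10 the reload step can also be done via the CoreAdmin API, for example:
curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=collection1"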
You did not specify how exactly you implemented the fields in schema.xml, but for me the following worked:
<!-- fields for index-links plugin -->
<field name="inlinks" type="url" stored="true" indexed="false" multiValued="true"/>
<field name="outlinks" type="url" stored="true" indexed="false" multiValued="true"/>
Best regards and good luck!
Source: https://stackoverflow.com/questions/43757606/nutch-1-13-index-links-configuration