Can GraphDB load 10 million statements with OWL reasoning?

Submitted by 给你一囗甜甜゛ on 2019-12-01 23:44:32

@MarkMiller you can take a look at the Preload tool, which is part of the GraphDB 8.4.0 release. It is designed specifically to handle large amounts of data at constant speed. Note that it works without inference, so you'll need to load your data first, then change the ruleset and re-infer the statements.

http://graphdb.ontotext.com/documentation/free/loading-data-using-preload.html
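To make the Preload step concrete, here is a minimal sketch of an invocation. The flags (`-f` to force overwriting the repository, `-i` for the repository id), the repository id `myrepo`, and the data file path are assumptions for illustration; check the documentation page above for the exact options in your GraphDB version. The script only echoes the command rather than running it:

```shell
#!/bin/sh
# Sketch of a Preload run from the GraphDB distribution directory.
# Flags, repository id, and file path below are assumed, not verified.
REPO=myrepo                     # hypothetical repository id
DATA=/data/statements.nt.gz     # hypothetical data file
echo "./bin/preload -f -i $REPO $DATA"
```

Remember that the repository loaded this way has no inference, which is where the ruleset-switching queries below come in.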

Just typing out @Konstantin Petrov's correct suggestion with tidier formatting. All of these queries should be run in the repository of interest. At some point while working this out, I misled myself into thinking that I should be connected to the SYSTEM repo when running these queries.

All of these queries also require the following prefix definition:

prefix sys: <http://www.ontotext.com/owlim/system#>

This doesn't directly address the timing/performance of loading large datasets into an OWL-reasoning repository, but it does show how to switch to a higher reasoning level after loading lots of triples into a no-inference ("empty" ruleset) repository.

You could start by querying for the currently available rulesets, and then re-run this same SELECT after each of the inserts below:

SELECT ?state ?ruleset { ?state sys:listRulesets ?ruleset }

Add a predefined ruleset:

INSERT DATA { _:b sys:addRuleset "rdfsplus-optimized" }

Make the new ruleset the default:

INSERT DATA { _:b sys:defaultRuleset "rdfsplus-optimized" }

Re-infer the statements; this could take a long time!

INSERT DATA { [] sys:reinfer [] }
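The steps above can be scripted against GraphDB's RDF4J-style REST interface, where SPARQL updates are POSTed to /repositories/&lt;id&gt;/statements with Content-Type application/sparql-update. This is a sketch: the endpoint URL http://localhost:7200 and the repository id "myrepo" are assumptions, and sending is left commented out so the script is safe to run as-is.

```python
# Sketch: replay the ruleset-switch sequence over the SPARQL 1.1 protocol.
# The base URL and repository id are assumptions; adjust for your setup.
from urllib import request

PREFIX = "prefix sys: <http://www.ontotext.com/owlim/system#>\n"

UPDATES = [
    'INSERT DATA { _:b sys:addRuleset "rdfsplus-optimized" }',
    'INSERT DATA { _:b sys:defaultRuleset "rdfsplus-optimized" }',
    "INSERT DATA { [] sys:reinfer [] }",  # may run for a long time
]

def build_update(base_url: str, repo: str, update: str) -> request.Request:
    """Build (but do not send) a SPARQL update request for one step."""
    return request.Request(
        url=f"{base_url}/repositories/{repo}/statements",
        data=(PREFIX + update).encode("utf-8"),
        headers={"Content-Type": "application/sparql-update"},
        method="POST",
    )

if __name__ == "__main__":
    for upd in UPDATES:
        req = build_update("http://localhost:7200", "myrepo", upd)
        # request.urlopen(req)  # uncomment to actually execute each update
        print(req.method, req.full_url)
```

Running the updates in this order (add ruleset, set default, re-infer) mirrors the manual sequence above; only the final re-infer step is expensive, since it recomputes inferred statements under the new ruleset.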
