Can GraphDB load 10 million statements with OWL reasoning?

Asked by 栀梦 on 2021-01-23 01:54

I am struggling to load most of the Drug Ontology OWL files and most of the ChEBI OWL files into a GraphDB Free v8.3 repository with Optimized OWL-Horst reasoning enabled.

2 Answers
  • 2021-01-23 02:02

    @MarkMiller You can take a look at the Preload tool, which is part of the GraphDB 8.4.0 release. It is specifically designed to handle large amounts of data at constant speed. Note that it works without inference, so you will need to load your data first, then change the ruleset and re-infer the statements (sketched below).

    http://graphdb.ontotext.com/documentation/free/loading-data-using-preload.html
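
    A minimal sketch of the "switch ruleset, then re-infer" part as plain SPARQL updates, assuming the repository was created with the "empty" (no-inference) ruleset and that "owl-horst-optimized" is the identifier for the Optimized OWL-Horst ruleset the question asks about; the bulk load itself would be done with the Preload command-line tool per the documentation linked above, and the exact ruleset name can be verified with the listRulesets query in the next answer. Run each update separately, in order:

    # 1. Register the target ruleset with the repository
    PREFIX sys: <http://www.ontotext.com/owlim/system#>
    INSERT DATA { _:b sys:addRuleset "owl-horst-optimized" }

    # 2. Make it the default ruleset for subsequent transactions
    PREFIX sys: <http://www.ontotext.com/owlim/system#>
    INSERT DATA { _:b sys:defaultRuleset "owl-horst-optimized" }

    # 3. Recompute inferred statements under the new ruleset (can take a long time on millions of statements)
    PREFIX sys: <http://www.ontotext.com/owlim/system#>
    INSERT DATA { [] sys:reinfer [] }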

  • 2021-01-23 02:25

    Just typing out @Konstantin Petrov's correct suggestion with tidier formatting. All of these queries should be run in the repository of interest... at some point while working this out, I misled myself into thinking that I should be connected to the SYSTEM repository when running them.

    All of these queries also require the following prefix definition:

    prefix sys: <http://www.ontotext.com/owlim/system#>

    This doesn't directly address the timing/performance of loading large datasets into an OWL reasoning repository, but it does show how to switch to a higher level of reasoning after loading lots of triples into a no-inference ("empty" ruleset) repository.

    You could start by querying for the current reasoning level/ruleset, and then run this same SELECT statement again after each INSERT below.

    SELECT ?state ?ruleset { ?state sys:listRulesets ?ruleset }

    Add a predefined ruleset

    INSERT DATA { _:b sys:addRuleset "rdfsplus-optimized" }

    Make the new ruleset the default

    INSERT DATA { _:b sys:defaultRuleset "rdfsplus-optimized" }

    Re-infer... could take a long time!

    INSERT DATA { [] <http://www.ontotext.com/owlim/system#reinfer> [] }
