marklogic-8

Extracting data from MarkLogic db using Java Client API when fetching one document may be dependent upon other documents

Posted on 2019-12-11 05:09:05
Question: I have started working with the MarkLogic database using the Java Client API. I have two use cases. The first is to extract all the documents that are in the same collection, are in JSON form, and have a particular date less than or equal to a certain date. The JSON is of the form:

{ "id": "12345", "date": "2012-12-12", "messageType": "dummy_type", ... }

I am able to do this using the code below:

val queryMgr = client.newQueryManager();
var rawHandle: StringHandle = new StringHandle
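The snippet above is cut off. As a rough guide, here is a minimal Java sketch of how such a search might be expressed with the Java Client API's StructuredQueryBuilder. The host, port, credentials, and collection name are placeholders, and it assumes a range index of type xs:date has been configured on the date property:

```java
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.document.DocumentPage;
import com.marklogic.client.document.DocumentRecord;
import com.marklogic.client.document.JSONDocumentManager;
import com.marklogic.client.io.StringHandle;
import com.marklogic.client.query.QueryManager;
import com.marklogic.client.query.StructuredQueryBuilder;
import com.marklogic.client.query.StructuredQueryBuilder.Operator;
import com.marklogic.client.query.StructuredQueryDefinition;

public class DateCollectionSearch {
    public static void main(String[] args) {
        // Placeholder connection details -- adjust host, port, and credentials.
        DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("user", "password"));

        QueryManager queryMgr = client.newQueryManager();
        StructuredQueryBuilder qb = queryMgr.newStructuredQueryBuilder();

        // Documents in the given collection whose "date" property is <= 2012-12-12.
        // Assumes a range index of type xs:date on the "date" JSON property.
        StructuredQueryDefinition query = qb.and(
                qb.collection("my-collection"),
                qb.range(qb.jsonProperty("date"), "xs:date", Operator.LE, "2012-12-12"));

        JSONDocumentManager docMgr = client.newJSONDocumentManager();
        DocumentPage page = docMgr.search(query, 1);
        for (DocumentRecord record : page) {
            // Print each matching document's URI and JSON content.
            System.out.println(record.getUri());
            System.out.println(record.getContent(new StringHandle()).get());
        }

        client.release();
    }
}
```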

Is there any tool to view/edit/delete documents in MarkLogic?

Posted on 2019-12-11 02:37:20
Question: Is there any tool to view/edit/delete documents in MarkLogic, like Mongo Management Studio for MongoDB?

Answer 1: Built into MarkLogic is support for WebDAV. You can create a WebDAV App Server in the admin console, and then any WebDAV client can access documents. https://docs.marklogic.com/guide/admin/webdav There are limits to what the WebDAV protocol supports, but this does give basic integration at the document level. The MarkLogic extension to xmlsh includes a Java GUI providing a basic tree

How can I optimize a SPARQL query that returns optional properties?

Posted on 2019-12-10 18:28:25
Question: How can I optimize a SPARQL query like the following? The intent of the query is to specify a resource (the country resource where countryCode = "US") and get back the optional properties defined on that resource. Unfortunately, the OPTIONAL blocks are being evaluated before the parent block, which causes the query engine to load the data for all countries. What I want is something like LEFT OUTER JOIN behavior, but the query engine is not handling it this way. What can I do to improve query performance
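The query itself is not shown above, but as a hedged illustration of the shape being described (one resource pinned down by countryCode, then OPTIONAL patterns for its properties), here is a sketch using the Java Client API's SPARQL support. The prefix, predicate names, and connection details are all invented for the example:

```java
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.io.JacksonHandle;
import com.marklogic.client.semantics.SPARQLQueryDefinition;
import com.marklogic.client.semantics.SPARQLQueryManager;

public class OptionalPropertiesQuery {
    public static void main(String[] args) {
        // Placeholder connection details.
        DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("user", "password"));

        SPARQLQueryManager sparqlMgr = client.newSPARQLQueryManager();

        // Hypothetical vocabulary: find one country by code, then pull optional properties.
        String sparql =
            "PREFIX ex: <http://example.org/ontology#> " +
            "SELECT ?country ?capital ?population WHERE { " +
            "  ?country ex:countryCode \"US\" . " +
            "  OPTIONAL { ?country ex:capital ?capital } " +
            "  OPTIONAL { ?country ex:population ?population } " +
            "}";

        SPARQLQueryDefinition query = sparqlMgr.newQueryDefinition(sparql);
        JacksonHandle results = sparqlMgr.executeSelect(query, new JacksonHandle());
        System.out.println(results.get());

        client.release();
    }
}
```

Whether the OPTIONAL patterns are evaluated only against the single matched resource depends on the engine; a commonly suggested general workaround is to bind the specific resource first (for example with a nested SELECT) so the OPTIONAL joins run against that one binding, though whether that helps in MarkLogic would need testing.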

MARKLOGIC: Is it possible to use more than one column from a CSV file when generating the URI ID during data ingestion in MarkLogic?

Posted on 2019-12-02 04:35:46
I am quite new to MarkLogic and I am not sure how best to deal with the challenge I have right now. I have a CSV file, exported from a table, that will be ingested into a MarkLogic database. The source table uses a combination of four columns as its unique primary key. In MarkLogic, by default, only one column from the CSV file can be used as the URI ID. My question is: is it possible to use more than one column from a CSV file as the URI ID during data ingestion in MarkLogic? If yes, is this feature or setting available in Data Hub? If it is not possible, what is usually the best practice for this in
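For illustration only, one general way to get a composite URI is to build it from all four key columns during a custom ingest. The sketch below uses the Java Client API; the column names, URI pattern, and connection details are hypothetical, and this is not a claim about what mlcp or Data Hub support out of the box:

```java
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.document.JSONDocumentManager;
import com.marklogic.client.io.Format;
import com.marklogic.client.io.StringHandle;

public class CompositeUriIngest {
    public static void main(String[] args) {
        // Placeholder connection details.
        DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("user", "password"));
        JSONDocumentManager docMgr = client.newJSONDocumentManager();

        // Hypothetical values for one CSV row's four primary-key columns.
        String region = "EU", customer = "ACME", order = "1001", line = "7";

        // Combine all four key columns into one URI instead of relying on a single URI ID column.
        String uri = String.format("/orders/%s/%s/%s/%s.json", region, customer, order, line);
        String json = "{\"region\":\"EU\",\"customer\":\"ACME\",\"order\":\"1001\",\"line\":\"7\"}";

        docMgr.write(uri, new StringHandle(json).withFormat(Format.JSON));
        client.release();
    }
}
```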