How to process a large number of documents in chunks to avoid an expanded tree cache full error
Question: I have an entity in MarkLogic containing around 98k+ documents (`/someEntity/[ID].xml`), and I need to add a few new tags to all of those documents. I prepared a query that adds the child node, but when I run it against the whole entity I get an expanded tree cache full error. If I increase the cache by a few more gigabytes it works, but it takes a long time to complete. I also tried `xdmp:clear-expanded-tree-cache()`, but that doesn't help either. Any pointers on how we can fetch and process these documents in chunks?
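For context, the update I'm running looks roughly like the following (the root element `someEntity` and the inserted tag `newTag` are simplified placeholders for my actual structure). Because everything happens in a single query, all 98k documents end up expanded in one transaction:

```xquery
xquery version "1.0-ml";

(: Hypothetical sketch of the bulk update that fails: one transaction
   touches every document under /someEntity/, so the expanded tree
   cache fills up before the query can finish. :)
for $doc in cts:search(fn:collection(),
                       cts:directory-query("/someEntity/", "1"))
return
  xdmp:node-insert-child($doc/someEntity, <newTag>value</newTag>)
```

What I'm looking for is a way to run this same update over the URIs batch by batch, so that each transaction only expands a bounded number of documents.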