I have a MongoDB collection, whose docs use several levels of nesting, from which I would like to extract a multidimensional array compiled from a subset of their fields. I have
The "chunking" comes from your code: your reduce function's values parameter can contain either {time:<timestamp>,value:<value>}
emitted from your map function, or {time:[<timestamps>],value:[<values]}
returned from a previous call to your reduce function.
I don't know if it will happen in practice, but it can happen in theory.
Simply have your map function emit the same kind of objects that your reduce function returns, i.e. `emit(<id>, {time: [ts], value: [P[1]]})`, and change your reduce function accordingly, i.e. `Array.prototype.push.apply(result.time, V.time)` and similarly for `result.value`.
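Here is a minimal standalone sketch of that approach (plain Node.js, not mongosh). The field names `sensor`, `ts`, and `value`, and the sample documents, are illustrative placeholders, not from your schema; in MongoDB the map function would read `this` and call a global `emit()`, which this harness simulates so the reduce logic can be checked in isolation, including the "chunked" case where a previous reduce result is fed back in:

```javascript
// Sample input documents (hypothetical shape).
var docs = [
  { sensor: "a", ts: 1, value: 10 },
  { sensor: "a", ts: 2, value: 20 },
  { sensor: "a", ts: 3, value: 30 }
];

// Simulated emit(): collects what the map function produces.
var emitted = [];
function emit(key, val) { emitted.push({ key: key, val: val }); }

function mapFn(doc) {
  // Emit the same shape the reduce function returns: one-element arrays.
  emit(doc.sensor, { time: [doc.ts], value: [doc.value] });
}

function reduceFn(key, values) {
  var result = { time: [], value: [] };
  values.forEach(function (V) {
    // Concatenate in place: append every element of V.time / V.value.
    Array.prototype.push.apply(result.time, V.time);
    Array.prototype.push.apply(result.value, V.value);
  });
  return result;
}

docs.forEach(mapFn);

// Simulate chunked reduction: reduce the first two emits, then reduce that
// partial result together with the third emit. Because map emits the same
// shape reduce returns, both calls work identically.
var partial = reduceFn("a", [emitted[0].val, emitted[1].val]);
var finalResult = reduceFn("a", [partial, emitted[2].val]);
console.log(JSON.stringify(finalResult));
// → {"time":[1,2,3],"value":[10,20,30]}
```

The key property this demonstrates is that the reduce function is closed over its own output type, which is exactly what MongoDB requires of reduce functions, since they may be re-invoked on partial results.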
Well, I actually don't understand why you're not using an array of time/value pairs instead of a pair of arrays, i.e. `emit(<id>, { pairs: [ {time: ts, value: P[1]} ] })` or `emit(<id>, { pairs: [ [ts, P[1]] ] })` in the map function, and `Array.prototype.push.apply(result.pairs, V.pairs)` in the reduce function. That way, you won't even need the finalize function (except maybe to "unwrap" the array from the `pairs` property: because the reduce function cannot return an array, you have to wrap it in an object that way).
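A sketch of this pairs variant, using the same standalone harness and the same placeholder field names as before; the finalize step does nothing but unwrap the array from the wrapper object:

```javascript
// Sample input documents (hypothetical shape).
var docs = [
  { sensor: "a", ts: 1, value: 10 },
  { sensor: "a", ts: 2, value: 20 }
];

// Simulated emit(): collects what the map function produces.
var emitted = [];
function emit(key, val) { emitted.push(val); }

function mapFn(doc) {
  // One array of [time, value] pairs instead of two parallel arrays.
  emit(doc.sensor, { pairs: [[doc.ts, doc.value]] });
}

function reduceFn(key, values) {
  var result = { pairs: [] };
  values.forEach(function (V) {
    Array.prototype.push.apply(result.pairs, V.pairs);
  });
  return result;
}

function finalizeFn(key, reduced) {
  // The reduce function must return the wrapper object, but finalize may
  // return a bare array, so unwrap here.
  return reduced.pairs;
}

docs.forEach(mapFn);
var out = finalizeFn("a", reduceFn("a", emitted));
console.log(JSON.stringify(out));
// → [[1,10],[2,20]]
```

Keeping each timestamp next to its value also avoids any risk of the two arrays getting out of sync if reduce calls interleave in an unexpected order.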