I've got a tool that outputs a JSON record on each line, and I'd like to process it with `jq`.

The output looks something like this:

```
{"ts":"2017-08-15T21:20:47.029Z","id":"123","elapsed_ms":10}
{"ts":"2017-08-15T21:20:47.044Z","id":"456","elapsed_ms":13}
```

How can I group these records by their `id`?
As @JeffMercado pointed out, jq handles streams of JSON just fine, but if you use `group_by`, then you'd have to ensure its input is an array. That could be done in this case using the `-s` command-line option; if your jq has the `inputs` filter, then it can also be done using that filter in conjunction with the `-n` option.
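For instance, both of the following produce the same grouped array (a minimal sketch; `input.jsonl` is a hypothetical file holding the line-delimited records):

```
# Slurp the entire stream into a single array, then group it:
jq -s 'group_by(.id)' input.jsonl

# Equivalent: with -n, read the same stream explicitly via inputs:
jq -n '[inputs] | group_by(.id)' input.jsonl
```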
If you have a version of jq with `inputs` (which is available in jq 1.5), however, then a better approach would be to use the following streaming variant of `group_by`:
```
# sort-free stream-oriented variant of group_by/1
# f should always evaluate to a string.
# Output: a stream of arrays, one array per group
def GROUPS_BY(stream; f): reduce stream as $x ({}; .[$x|f] += [$x] ) | .[] ;
```
Usage example: `GROUPS_BY(inputs; .id)`

Note that you will want to use this with the `-n` command-line option.
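Putting it together (a sketch; the definition is simply inlined into the jq program, and `-c` is added so each group prints on its own line):

```
./tool | jq -cn '
  def GROUPS_BY(stream; f): reduce stream as $x ({}; .[$x|f] += [$x]) | .[];
  GROUPS_BY(inputs; .id)'
```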
Such a streaming variant has two main advantages:

1. it does not require the internal sort performed by `group_by/1`;
2. used with `inputs`, it avoids slurping the entire input as an array, which opens the door to economizing on memory (illustrated below).

Please note that the above definition of `GROUPS_BY/2` follows the convention for such streaming filters in that it produces a stream. Other variants are of course possible.
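For example, one such variant (with the hypothetical name `groups_by`) could collect the groups back into a single array, matching the output shape of `group_by`:

```
# Assumes GROUPS_BY/2 is defined as above.
def groups_by(stream; f): [GROUPS_BY(stream; f)];
```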
The following illustrates how to economize on memory. Suppose the task is to produce a frequency count of `.id` values. The humdrum solution would be:

```
GROUPS_BY(inputs; .id) | [(.[0]|.id), length]
```

A more economical and indeed far better solution would be:

```
GROUPS_BY(inputs|.id; .) | [.[0], length]
```
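The second form streams only the `.id` strings into the reduction, so each group accumulates bare strings rather than whole records. As a full pipeline (a sketch, reusing `./tool` from the question):

```
./tool | jq -cn '
  def GROUPS_BY(stream; f): reduce stream as $x ({}; .[$x|f] += [$x]) | .[];
  GROUPS_BY(inputs|.id; .) | [.[0], length]'
```

For the sample records above, this should emit `["123",1]` and `["456",1]`.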
Use the `--slurp` (or `-s`) switch:

```
./tool | jq --slurp 'group_by(.id)'
```
It outputs the following:
```
[
  [
    {
      "ts": "2017-08-15T21:20:47.029Z",
      "id": "123",
      "elapsed_ms": 10
    }
  ],
  [
    {
      "ts": "2017-08-15T21:20:47.044Z",
      "id": "456",
      "elapsed_ms": 13
    }
  ]
]
```
...which you can then process further. For example:

```
./tool | jq -s 'group_by(.id) | map({id: .[0].id, count: length})'
```
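Given the two sample records above, that should produce:

```
[
  {
    "id": "123",
    "count": 1
  },
  {
    "id": "456",
    "count": 1
  }
]
```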