Question
I've got a tool that outputs a JSON record on each line, and I'd like to process it with jq.
The output looks something like this:
{"ts":"2017-08-15T21:20:47.029Z","id":"123","elapsed_ms":10}
{"ts":"2017-08-15T21:20:47.044Z","id":"456","elapsed_ms":13}
When I pass this to jq as follows:
./tool | jq 'group_by(.id)'
...it outputs an error:
jq: error (at <stdin>:1): Cannot index string with string "id"
How do I get jq to handle JSON-record-per-line data?
Answer 1:
Use the --slurp (or -s) switch:
./tool | jq --slurp 'group_by(.id)'
It outputs the following:
[
[
{
"ts": "2017-08-15T21:20:47.029Z",
"id": "123",
"elapsed_ms": 10
}
],
[
{
"ts": "2017-08-15T21:20:47.044Z",
"id": "456",
"elapsed_ms": 13
}
]
]
...which you can then process further. For example:
./tool | jq -s 'group_by(.id) | map({id: .[0].id, count: length})'
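With the two sample records above, this should produce one entry per distinct id (each with a count of 1 here, since neither id repeats):
[
  {
    "id": "123",
    "count": 1
  },
  {
    "id": "456",
    "count": 1
  }
]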
Answer 2:
As @JeffMercado pointed out, jq handles streams of JSON just fine, but if you use group_by, then you'd have to ensure its input is an array. That could be done in this case using the -s command-line option; if your jq has the inputs filter, then it can also be done using that filter in conjunction with the -n option.
If your jq has inputs (available since jq 1.5), however, a better approach would be to use the following streaming variant of group_by:
# sort-free stream-oriented variant of group_by/1
# f should always evaluate to a string.
# Output: a stream of arrays, one array per group
def GROUPS_BY(stream; f): reduce stream as $x ({}; .[$x|f] += [$x] ) | .[] ;
Usage example: GROUPS_BY(inputs; .id)
Note that you will want to use this with the -n command-line option.
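For illustration, a complete invocation might look as follows, with the def pasted inline and -c added for compact, one-group-per-line output (assuming the tool and sample records above):
./tool | jq -cn '
  def GROUPS_BY(stream; f): reduce stream as $x ({}; .[$x|f] += [$x]) | .[];
  GROUPS_BY(inputs; .id)'
which should emit something like:
[{"ts":"2017-08-15T21:20:47.029Z","id":"123","elapsed_ms":10}]
[{"ts":"2017-08-15T21:20:47.044Z","id":"456","elapsed_ms":13}]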
Such a streaming variant has two main advantages:
- it generally requires less memory, in that it does not require a copy of the entire input stream to be kept in memory while it is being processed;
- it is potentially faster because it does not require any sort operation, unlike group_by/1.
Please note that the above definition of GROUPS_BY/2 follows the convention for such streaming filters in that it produces a stream. Other variants are of course possible.
Handling a large amount of data
The following illustrates how to economize on memory. Suppose the task is to produce a frequency count of .id values. The humdrum solution would be:
GROUPS_BY(inputs; .id) | [(.[0]|.id), length]
A more economical and indeed far better solution would be:
GROUPS_BY(inputs|.id; .) | [.[0], length]
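Assuming the same sample records, a full invocation of this version might be:
./tool | jq -cn '
  def GROUPS_BY(stream; f): reduce stream as $x ({}; .[$x|f] += [$x]) | .[];
  GROUPS_BY(inputs|.id; .) | [.[0], length]'
which should print:
["123",1]
["456",1]
The saving comes from the fact that only the .id strings are accumulated, rather than the full records.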
Source: https://stackoverflow.com/questions/45714384/parsing-json-record-per-line-with-jq