Parsing JSON record-per-line with jq?

Backend · Unresolved · 2 answers · 1076 views
被撕碎了的回忆 2021-01-17 13:42

I've got a tool that outputs a JSON record on each line, and I'd like to process it with jq.

The output looks something like this:

{"


        
2 Answers
  •  迷失自我
    2021-01-17 14:19

    As @JeffMercado pointed out, jq handles streams of JSON just fine, but if you use group_by, you'd have to ensure its input is an array. That could be done in this case using the -s command-line option; if your jq has the inputs filter, it can also be done using that filter in conjunction with the -n option.
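
    For example, given a file of one-record-per-line JSON (the filename data.jsonl is just for illustration):

     # slurp the entire stream into one array, then group
     jq -s 'group_by(.id)' data.jsonl

     # equivalent with jq 1.5+: collect the stream via inputs under -n
     jq -n '[inputs] | group_by(.id)' data.jsonl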

    If you have a version of jq with inputs (available since jq 1.5), however, a better approach is to use the following streaming variant of group_by:

     # sort-free stream-oriented variant of group_by/1
     # f should always evaluate to a string.
     # Output: a stream of arrays, one array per group
     def GROUPS_BY(stream; f): reduce stream as $x ({}; .[$x|f] += [$x] ) | .[] ;
    

    Usage example: GROUPS_BY(inputs; .id)

    Note that you will want to use this with the -n command-line option; otherwise jq consumes the first record before inputs runs, and that record would be silently skipped.
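
    Putting it together, a minimal run might look like this (the file data.jsonl and its string-valued .id fields are illustrative; recall that f must evaluate to a string):

     $ cat data.jsonl
     {"id":"a","v":1}
     {"id":"b","v":2}
     {"id":"a","v":3}

     $ jq -cn 'def GROUPS_BY(stream; f): reduce stream as $x ({}; .[$x|f] += [$x]) | .[];
               GROUPS_BY(inputs; .id)' data.jsonl
     [{"id":"a","v":1},{"id":"a","v":3}]
     [{"id":"b","v":2}]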

    Such a streaming variant has two main advantages:

    1. it generally requires less memory, since a copy of the entire input stream need not be kept in memory while it is being processed;
    2. it is potentially faster, because unlike group_by/1 it does not require a sort.

    Please note that the above definition of GROUPS_BY/2 follows the convention for such streaming filters in that it produces a stream. Other variants are of course possible.
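
    For instance, a non-streaming variant (the name GROUPS_BY_OBJ is mine) could simply omit the final .[] and return a single object mapping each key to its group:

     def GROUPS_BY_OBJ(stream; f): reduce stream as $x ({}; .[$x|f] += [$x]);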

    Handling a large amount of data

    The following illustrates how to economize on memory. Suppose the task is to produce a frequency count of .id values. The humdrum solution would be:

    GROUPS_BY(inputs; .id) | [(.[0]|.id), length]
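
    With the three sample records above, this prints one [id, count] array per group:

     ["a",2]
     ["b",1]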
    

    A more economical and indeed far better solution would be:

    GROUPS_BY(inputs|.id; .) | [.[0], length]
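
    The saving comes from grouping the stream of .id strings themselves rather than the whole records, so only the ids accumulate in the reduce state. With the same sample input:

     $ jq -cn 'def GROUPS_BY(stream; f): reduce stream as $x ({}; .[$x|f] += [$x]) | .[];
               GROUPS_BY(inputs|.id; .) | [.[0], length]' data.jsonl
     ["a",2]
     ["b",1]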
    
