Print previous line if condition is met


I would like to grep for a word, then look at the second column of the matching line and check whether it is bigger than a given value. If it is, I want to print the previous line.

Ex:

In

4 Answers
  • 2021-02-08 10:49

    This can be a way:

    $ awk '$1=="BB" && $2>1 {print f} {f=$1}' file
    AAAAAAAAAAAAA
    

    Explanation

    • $1=="BB" && $2>1 {print f}: if the 1st field is exactly BB and the 2nd field is bigger than 1, then print f, the value stored from the previous line.
    • {f=$1}: store the 1st field of the current line in f, so that it is available when the next line is read. Here the line to be printed consists of a single field, so $1 is the whole line; use {f=$0} to remember the entire line in the general case (see the short demo below).
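
    For instance, with a small hypothetical input file (invented for illustration; only the output line AAAAAAAAAAAAA comes from the answer above):

    $ printf 'AAAAAAAAAAAAA\nBB 2\nCCCCCCCCCCCCC\nBB 0.5\n' > file
    $ awk '$1=="BB" && $2>1 {print f} {f=$1}' file
    AAAAAAAAAAAAA

    Only the line preceding "BB 2" is printed; "BB 0.5" fails the $2>1 test, so the line before it is not.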
  • 2021-02-08 10:52

    Another option: reverse the file and print the next line if the condition matches:

    tac file | awk '$1 == "BB" && $2 > 1 {getline; print}' | tac
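
    With the same hypothetical sample file as in the first answer, this produces the same result:

    $ tac file | awk '$1 == "BB" && $2 > 1 {getline; print}' | tac
    AAAAAAAAAAAAA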
    
  • 2021-02-08 11:03

    Concerning generality

    I think it needs to be mentioned that the most general solution to this class of problem involves two passes:

    • the first pass to add a decimal record number ($REC) to the front of each line, effectively grouping lines into records by $REC
    • the second pass to trigger on the first instance of each new value of $REC as a record boundary (resetting $CURREC), thereafter rolling along in the native AWK idiom concerning the records to follow matching $CURREC.

    In the intermediate file, some sequence of decimal digits followed by a separator (for human reasons, typically an added tab or space) is parsed (aka conceptually snipped off) as out-of-band with respect to the baseline file.
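
    As an illustration only (the answer leaves yourscript.awk unspecified, so the following is a hypothetical sketch rather than its actual content), a second pass for the original problem might look roughly like this, with the prepended record number arriving as $1 and the original columns shifted one position to the right:

    # hypothetical second pass: a sketch, not the actual yourscript.awk
    # $1 is the record number prepended by the first pass (paste's default
    # separator is a tab), so the original 1st and 2nd columns arrive as $2 and $3
    $1 != CURREC {                    # first line carrying a new record number
        CURREC = $1                   # reset the current-record marker
    }
    $2 == "BB" && $3 > 1 {            # the original condition, one column to the right
        print prev                    # print the previously remembered line
    }
    {
        prev = $0
        sub(/^[0-9]+\t/, "", prev)    # snip the out-of-band annotation back off
    }

    For this simple problem the record marker is not strictly needed; it is shown only to illustrate where the per-record boundary trigger described above would live.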

    Command line paste monster

    Even confined to the command line, it's an easy matter to ensure that the intermediate file never hits disk. You just need to use an advanced shell such as ZSH (my own favourite) which supports process substitution (note that the unquoted awk $X in the expository version further below also relies on zsh not word-splitting parameter expansions; under bash you would write awk "$X"):

    paste <( <input.txt awk "BEGIN { R=0; N=0; } /Header pattern/ { N=1; } { R=R+N; N=0; print R; }" ) input.txt | awk -f yourscript.awk 
    

    Let's render that one-liner more suitable for exposition:

    P="/Header pattern/"
    X="BEGIN { R=0; N=0; } $P { N=1; } { R=R+N; N=0; print R; }"
    paste <( <input.txt awk $X ) input.txt | awk -f yourscript.awk 
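
    To make the mechanism concrete, the annotated stream reaching yourscript.awk would look something like this for a small hypothetical input.txt (data lines invented; /Header pattern/ taken literally):

    $ cat input.txt
    Header pattern alpha
    data 1
    data 2
    Header pattern beta
    data 3
    $ paste <( <input.txt awk $X ) input.txt
    1	Header pattern alpha
    1	data 1
    1	data 2
    2	Header pattern beta
    2	data 3

    The leading tab-separated column is the record number ($REC) that the second pass parses and conceptually snips off.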
    

    This starts three processes: the trivial inline AWK script, paste, and the AWK script you really wanted to run in the first place.

    Behind the scenes, the <() command line construct creates a named pipe and passes the pipe's name to paste as the name of its first input file. For paste's second input file, we give it the name of our original input file (the file is thus read sequentially, in parallel, by two different processes, which between them cost at most one physical read from disk if the input file is cold, since the second reader is served from the page cache).

    The magic named pipe in the middle is an in-memory FIFO that ancient Unix probably managed at about 16 kB of average size (intermittently pausing the paste process if the yourscript.awk process is sluggish in draining this FIFO back down).

    Perhaps modern Unix throws a bigger buffer in there because it can, but it's certainly not a scarce resource you should be concerned about, until you write your first truly advanced command line with process redirection involving these by the hundreds or thousands :-)

    Additional performance considerations

    On modern CPUs, all three of these processes could easily find themselves running on separate cores.

    The first two of these processes border on the truly trivial: an AWK script with a single pattern match and some minor bookkeeping, paste called with two arguments. yourscript.awk will be hard pressed to run faster than these.

    What, your development machine has no lightly loaded cores to render this master shell solution pattern almost free in the execution domain?

    Ring, ring.

    Hello?

    Hey, it's for you. 2018 just called, and wants its problem back.

    2020 is officially the reprise of MTV: That's the way we like it, magic pipes for nothing and cores for free. Not to name out loud any particular TLA chip vendor who is rocking the space these days.

    As a final performance consideration, if you don't want the overhead of parsing actual record numbers:

    X="BEGIN { N=0; } $P { N=1; } { print N; N=0; }"
    

    Now your in-FIFO intermediate file is annotated with just two extra characters prepended to each line ('0' or '1', plus the default separator character added by paste), with a '1' marking the first line of each record.
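
    With the same hypothetical input.txt as before, this cheaper annotation would look like:

    1	Header pattern alpha
    0	data 1
    0	data 2
    1	Header pattern beta
    0	data 3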

    Named FIFOs

    Under the hood, these are no different than the magic FIFOs instantiated by Unix when you write any normal pipe command:

    cat file | proc1 | proc2 | proc3
    

    Three unnamed pipes (and a whole process devoted to cat you didn't even need).

    It's almost unfortunate that the truly exceptional convenience of the default stdin/stdout streams as premanaged by the shell obscures the reality that paste $magictemppipe1 $magictemppipe2 bears no additional performance considerations worth thinking about, in 99% of all cases.
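
    A throwaway demonstration of the construct, with arbitrary data and tab-separated output:

    $ paste <(printf 'a\nb\nc\n') <(seq 3)
    a	1
    b	2
    c	3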

    "Use the <() Y-joint, Luke."

    Your instinctive reflex toward natural semantic decomposition in the problem domain will herewith benefit immensely.

    If anyone had had the wits to name the shell construct <() as the YODA operator in the first place, I suspect it would have been pressed into universal service at least a solid decade ago.

  • 2021-02-08 11:15

    Combining sed & awk you get this: sed 'N;s/\n/ /' < file | awk '$3>1{print $1}'

    sed 'N;s/\n/ /': join each line with the line that follows it, replacing the newline character with a space.

    awk '$3>1{print $1}': print $1 (the 1st column) if $3 (the 3rd column) is bigger than 1.

    Note that N pairs lines 1-2, 3-4, and so on, so this assumes the matching BB lines always appear as the second line of each pair.
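
    With the hypothetical sample file from the first answer, the intermediate and final results look like this:

    $ sed 'N;s/\n/ /' file
    AAAAAAAAAAAAA BB 2
    CCCCCCCCCCCCC BB 0.5
    $ sed 'N;s/\n/ /' file | awk '$3>1{print $1}'
    AAAAAAAAAAAAA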
