Print the rest of the fields in awk

遥遥无期 2021-02-02 06:46

Suppose we have this data file:

john 32 marketing executive
jack 41 chief technical officer
jim  27 developer
dela 33 assistant risk management officer

6 answers
  • 2021-02-02 07:13

    An approach using awk that does not require gawk or any state mutation:

    awk '{print $1 " " substr($0, index($0, $3));}' datafile
    

    Update:

    A slightly longer solution that stands up to the case where $1 or $2 contains the text of $3 (for example a line like "ana 32 2 kids", where index($0, $3) would find the 2 inside 32 first). It assumes single-blank separators:

    awk '{print $1 " " substr($0, length($1 $2) + 3);}' data
    

    Or, even more robust, if you have a custom field separator:

    awk '{print $1 " " substr($0, length($1 FS $2 FS) + 1);}' data
    
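    For instance (a hypothetical tab-separated file, not part of the original question), the same idea with a custom separator might look like:

    awk -F'\t' '{print $1 "\t" substr($0, length($1 FS $2 FS) + 1);}' data.tsv
    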
  • 2021-02-02 07:16

    Another way is to just use sed to delete the first match of digits followed by whitespace:

    sed 's|[0-9]\+\s\+||' file

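    Note that \+ and \s are GNU sed extensions; a POSIX-portable spelling of the same idea (my adjustment, not part of the original answer) would be:

    sed 's|[0-9][0-9]*[[:space:]][[:space:]]*||' file
    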
  • Reliably with GNU awk for gensub() when using the default FS:

    $ gawk -v delNr=2 '{$0=gensub("^([[:space:]]*([^[:space:]]+[[:space:]]+){"delNr-1"})[^[:space:]]+[[:space:]]*","\\1","")}1' file
    john marketing executive
    jack chief technical officer
    jim  developer
    dela assistant risk management officer
    

    With other awks, you need to use match() and substr() instead of gensub() (see the sketch after the next example). Note that the variable delNr above tells awk which field you want to delete:

    $ gawk -v delNr=3 '{$0=gensub("^([[:space:]]*([^[:space:]]+[[:space:]]+){"delNr-1"})[^[:space:]]+[[:space:]]*","\\1","")}1' file
    john 32 executive
    jack 41 technical officer
    jim  27
    dela 33 risk management officer
    
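    Here is a minimal sketch of the match()/substr() approach for awks without gensub() (my own approximation of the idea, not from the original answer); delNr is again the 1-based number of the field to delete:

    awk -v delNr=2 'delNr <= NF {
        start = 1
        # skip over the first delNr-1 fields and the whitespace that follows each
        for (n = 1; n < delNr; n++)
            if (match(substr($0, start), /[^[:space:]]+[[:space:]]+/))
                start += RSTART + RLENGTH - 1
        # cut out the delNr-th field plus any whitespace after it
        if (match(substr($0, start), /[^[:space:]]+[[:space:]]*/))
            $0 = substr($0, 1, start + RSTART - 2) substr($0, start + RSTART + RLENGTH - 1)
    } 1' file
    

    Like the gensub() version, it leaves lines with fewer than delNr fields untouched and preserves the original spacing of the fields that remain.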

    Do not do this:

    awk '{sub($2 OFS, "")}1'
    

    as the same text that's in $2 might be at the end of $1, and/or $2 might contain RE metacharacters so there's a very good chance that you'll remove the wrong string that way.
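    A quick illustration of the metacharacter problem (my own example, not part of the original answer): if $2 contains a regex metacharacter, the dynamic regexp can match inside $1 instead.

    $ printf 'abc a.c y\n' | awk '{sub($2 OFS, "")}1'
    a.c y
    

    The . in $2 matched the b in $1, so the wrong string was removed; the intended output was "abc y".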

    Do not do this:

    awk '{$2=""}1' file
    

    as it adds an OFS and will compress all other contiguous whitespace between fields into a single blank char each.
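    For example (my own illustration), on a line with extra spacing between fields:

    $ printf 'jim  27      developer\n' | awk '{$2=""}1'
    jim  developer
    

    The run of blanks before "developer" has been squeezed to a single OFS, and an extra blank is left where $2 used to be.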

    Do not do this:

    awk '{$2="";sub("  "," ")}1' file
    

    as it has the space-compression issue mentioned above and relies on a hard-coded FS of a single blank (the default, though, so maybe not so bad), but more importantly, if there were spaces before $1 it would remove one of those instead of the space it's adding between $1 and $2.

    One last thing worth mentioning is that recent versions of gawk have a new function named patsplit(), which works like split() BUT, in addition to creating an array of the fields, it also creates an array of the spaces between the fields. That means you can manipulate fields and the spaces between them within the arrays, so you don't have to worry about awk recompiling the record using OFS when you modify a field; you then just print the fields you want from the arrays. See patsplit() in http://www.gnu.org/software/gawk/manual/gawk.html#String-Functions for more info.
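    A minimal sketch of that idea (my own, assuming gawk 4.0 or later and whitespace-separated fields), deleting field 2 while keeping the original spacing everywhere else:

    gawk '{
        n = patsplit($0, flds, /[^[:space:]]+/, seps)
        out = seps[0]                    # any whitespace before the first field
        for (i = 1; i <= n; i++)
            if (i != 2)                  # skip field 2 and the spaces after it
                out = out flds[i] seps[i]
        print out
    }' file
    

    Because the record is rebuilt from the two arrays rather than by assigning to a field, awk never recompiles $0 with OFS.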

  • 2021-02-02 07:26

    You can use simple awk like this:

    awk '{$2=""}1' file
    

    However, this leaves an extra OFS in your output, which can be avoided with this awk:

    awk '{sub($2 OFS, "")}1' file
    

    OR else by using this tr and cut combo (tr -s squeezes runs of blanks so that cut can split on a single space; the field list 1,3- selects field 1 plus everything from field 3 onward), which works on both Linux and OSX:

    tr -s ' ' < file | cut -d ' ' -f1,3-
    
  • 2021-02-02 07:26

    This removes field #2 and cleans up the extra space.

    awk '{$2="";sub("  "," ")}1' file
    
  • 2021-02-02 07:32

    Set the field(s) you want to skip to blank:

    awk '{$2 = ""; print $0;}' < file_name
    

    Source: Using awk to print all columns from the nth to the last
