unix - count of columns in file

Asked by 北海茫月 on 2020-12-22 19:17 (11 answers, 1809 views)

Given a file with data like this (i.e. stores.dat file)

sid|storeNo|latitude|longitude
2|1|-28.03720000|153.42921670
9|2|-33.85090000|151.03274200

11 Answers
  • 2020-12-22 19:44

    Based on Cat Kerr's response. This command works on Solaris (note: with a pipe-delimited file you also need -F'|', otherwise NF counts whitespace-separated fields):

    awk -F'|' '{print NF; exit}' stores.dat
    
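A quick sanity check on the sample data from the question. Note that awk needs -F'|' here; with the default field separator, NF would count whitespace-separated fields and report 1 for these rows:

```shell
# Recreate the sample stores.dat from the question
printf '%s\n' 'sid|storeNo|latitude|longitude' \
              '2|1|-28.03720000|153.42921670' \
              '9|2|-33.85090000|151.03274200' > stores.dat

# NF is the number of fields on the current line; exit stops after line 1
awk -F'|' '{print NF; exit}' stores.dat   # prints 4
```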
  • 2020-12-22 19:45

    You may try (note: this counts the separators, so add one to get the number of columns):

    head -1 stores.dat | grep -o '|' | wc -l
    
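A sketch of turning the separator count into a column count with shell arithmetic (same stores.dat header as in the question):

```shell
printf 'sid|storeNo|latitude|longitude\n' > stores.dat

# grep -o '|' prints each pipe on its own line; wc -l counts them (3 separators)
seps=$(head -1 stores.dat | grep -o '|' | wc -l)

# N separators in the header means N+1 columns
echo $(( seps + 1 ))   # prints 4
```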
  • 2020-12-22 19:55
    awk -F'|' '{print NF; exit}' stores.dat 
    

    It just quits right after the first line.

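Dropping the exit gives a related variant (my own extension, not part of the answer above): printing the field count of every line, which helps spot rows with a different number of columns:

```shell
printf '%s\n' 'sid|storeNo|latitude|longitude' \
              '2|1|-28.03720000|153.42921670' > stores.dat

# Without exit, awk prints one count per line: "line number: field count"
awk -F'|' '{print NR": "NF}' stores.dat
# 1: 4
# 2: 4
```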
  • 2020-12-22 19:56

    Perl solution similar to Mat's awk solution:

    perl -F'\|' -lane 'print $#F+1; exit' stores.dat
    

    I've tested this on a file with 1000000 columns.


    If the field separator is whitespace (one or more spaces or tabs) instead of a pipe:

    perl -lane 'print $#F+1; exit' stores.dat
    
  • 2020-12-22 19:57

    Proper pure bash way

    Under bash, you could simply:

    IFS=\| read -ra headline <stores.dat
    echo ${#headline[@]}
    4
    

    A lot quicker, as it needs no forks, and reusable, since $headline holds the full header line. You could, for example:

    printf " - %s\n" "${headline[@]}"
     - sid
     - storeNo
     - latitude
     - longitude
    

    Note: this syntax correctly handles spaces and other characters in column names.

    Alternative: a robust check for the maximum column count across all rows

    What if some rows contain extra columns?

    This command searches for the longest line, counting separators:

    tr -dc $'\n|' <stores.dat |wc -L
    3
    

    There are at most 3 separators, hence 4 fields.

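A sketch of the ragged-file case the answer above guards against, using a hypothetical file with one over-long row (note that wc -L is a GNU extension, so this may not work on BSD or macOS):

```shell
# A file where one data row has an extra trailing column
printf '%s\n' 'sid|storeNo|lat|lon' '2|1|-28.03|153.42|EXTRA' > ragged.dat

# Keep only newlines and pipes, then take the longest line length:
# the maximum separator count over all rows
seps=$(tr -dc '\n|' < ragged.dat | wc -L)
echo $(( seps + 1 ))   # prints 5, the maximum number of fields
```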