Given a file with data like this (e.g. a stores.dat file):
sid|storeNo|latitude|longitude
2|1|-28.03720000|153.42921670
9|2|-33.85090000|151.03274200
Based on Cat Kerr's response, this command works on Solaris. Note the -F'|': with awk's default whitespace splitting it would report a single field for this pipe-delimited data.
awk -F'|' '{print NF; exit}' stores.dat
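As a reusable variant, here is a minimal sketch wrapping it in a shell function; ncols is a hypothetical name, and the optional second argument (defaulting to the pipe) selects the separator:
ncols() { awk -F"${2:-|}" '{ print NF; exit }' "$1"; }
ncols stores.dat
4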
You may also try counting the separators in the header line:
head -1 stores.dat | grep -o \| | wc -l
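Note that this counts separators rather than fields, so it prints 3 for the sample header; the field count is one more. A minimal sketch that adds the 1 in shell arithmetic:
echo $(( $(head -1 stores.dat | grep -o '|' | wc -l) + 1 ))
4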
awk -F'|' '{print NF; exit}' stores.dat
The exit makes it quit right after the first line.
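If you want the count for every line rather than just the first, drop the exit; a minimal sketch that also prefixes each count with its line number:
awk -F'|' '{print NR": "NF" fields"}' stores.dat
1: 4 fields
2: 4 fields
3: 4 fields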
A Perl solution similar to Mat's awk one:
perl -F'\|' -lane 'print $#F+1; exit' stores.dat
I've tested this on a file with 1000000 columns.
If the field separator is whitespace (one or more spaces or tabs) instead of a pipe:
perl -lane 'print $#F+1; exit' stores.dat
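If the file really were whitespace-separated, the equivalent awk sketch would rely on its default splitting:
awk '{print NF; exit}' stores.dat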
Under bash, you could simply:
IFS=\| read -ra headline <stores.dat
echo ${#headline[@]}
4
This is a lot quicker, since it spawns no forks, and it is reusable: the headline array holds the full header line. For example:
printf " - %s\n" "${headline[@]}"
- sid
- storeNo
- latitude
- longitude
Note: this syntax correctly handles spaces and other special characters in column names.
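For instance, given a hypothetical header containing a space, such as sid|store no|latitude|longitude, the array still comes out with one entry per column:
IFS=\| read -ra headline <<<'sid|store no|latitude|longitude'
echo ${#headline[@]}
4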
What if some rows contain extra columns?
This command finds the line with the most separators, by deleting everything except newlines and pipes and measuring the longest remaining line:
tr -dc $'\n|' <stores.dat |wc -L
3
There are at most 3 separators, hence 4 fields.
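Note that wc -L (longest-line length) is not specified by POSIX; where it is unavailable, a portable awk sketch reports the maximum field count directly:
awk -F'|' 'NF > max { max = NF } END { print max }' stores.dat
4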