I am trying to find out the frequency of appearance of every letter of the English alphabet in an input file. How can I do this in a bash script?
Similar to mouviciel's answer above, but more portable for the Bourne and Korn shells used on BSD systems: when you don't have GNU sed, which supports \n in the replacement, you can backslash-escape a literal newline instead:
sed -e's/./&\
/g' file | sort | uniq -c | sort -nr
Or, to avoid the visual split on the screen, insert a literal newline by typing CTRL+V CTRL+J:
sed -e's/./&\^J/g' file | sort | uniq -c | sort -nr
My solution, using grep, sort and uniq:
grep -o . file | sort | uniq -c
Ignore case:
grep -o . file | sort -f | uniq -ic
A solution with sed, sort and uniq:
sed 's/\(.\)/\1\n/g' file | sort | uniq -c
This counts all characters, not only letters. You can filter out non-letters with:
sed 's/\(.\)/\1\n/g' file | grep '[A-Za-z]' | sort | uniq -c
If you want to treat uppercase and lowercase as the same, just add a translation:
sed 's/\(.\)/\1\n/g' file | tr '[:upper:]' '[:lower:]' | grep '[a-z]' | sort | uniq -c
Here is a suggestion:
# read the file one character at a time, then keep and count only the letters
while read -r -n 1 c
do
    echo "$c"
done < "$INPUT_FILE" | grep '[[:alpha:]]' | sort | uniq -c | sort -nr
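Here $INPUT_FILE is assumed to be a variable you set beforehand to the path of the file you want to analyse, for example (the file name is just a placeholder):
INPUT_FILE="input.txt"   # hypothetical path; replace with your own file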
Just one awk command:
awk -vFS="" '{for(i=1;i<=NF;i++)w[$i]++}END{for(i in w) print i,w[i]}' file
If you want it case-insensitive, add tolower():
awk -vFS="" '{for(i=1;i<=NF;i++)w[tolower($i)]++}END{for(i in w) print i,w[i]}' file
And if you want only letters:
awk -vFS="" '{for(i=1;i<=NF;i++){ if($i~/[a-zA-Z]/) { w[tolower($i)]++} } }END{for(i in w) print i,w[i]}' file
And if you want only digits, change /[a-zA-Z]/ to /[0-9]/:
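For instance, that digit-only variant would look like this (tolower() is no longer needed, since digits have no case):
awk -vFS="" '{for(i=1;i<=NF;i++){ if($i~/[0-9]/) { w[$i]++} } }END{for(i in w) print i,w[i]}' file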
If you do not want Unicode handling, run export LC_ALL=C first.
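For example (a minimal sketch; exporting the variable this way affects the rest of the current shell session):
export LC_ALL=C
awk -vFS="" '{for(i=1;i<=NF;i++){ if($i~/[a-zA-Z]/) { w[tolower($i)]++} } }END{for(i in w) print i,w[i]}' file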