Bash script to find the frequency of every letter in a file

Submitted by て烟熏妆下的殇ゞ on 2019-12-03 04:45:26

Just one awk command:

awk -vFS="" '{for(i=1;i<=NF;i++)w[$i]++}END{for(i in w) print i,w[i]}' file

If you want case-insensitive counting, add tolower():

awk -vFS="" '{for(i=1;i<=NF;i++)w[tolower($i)]++}END{for(i in w) print i,w[i]}' file

And if you want only letters:

awk -vFS="" '{for(i=1;i<=NF;i++){ if($i~/[a-zA-Z]/) { w[tolower($i)]++} } }END{for(i in w) print i,w[i]}' file

And if you want only digits, change /[a-zA-Z]/ to /[0-9]/.

If you do not want multibyte (Unicode) characters treated as single letters, run export LC_ALL=C first so the input is split into raw bytes.
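As a quick sanity check of the one-liner (the sample file name is arbitrary; splitting on an empty FS is supported by gawk and recent mawk, but is not guaranteed by POSIX):

```shell
# Create a tiny sample file; the name is arbitrary.
printf 'aba\n' > sample.txt

# An empty field separator makes every character its own field;
# w[] accumulates one counter per distinct character.
awk -vFS="" '{for(i=1;i<=NF;i++)w[$i]++}END{for(i in w) print i,w[i]}' sample.txt
```

For the input above this prints "a 2" and "b 1" (the order of a for-in loop over an awk array is unspecified, so pipe through sort if you need stable output).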

My solution uses grep, sort and uniq:

grep -o . file | sort | uniq -c

Ignore case:

grep -o . file | sort -f | uniq -ic

A solution with sed, sort and uniq:

sed 's/\(.\)/\1\n/g' file | sort | uniq -c

This counts all characters, not only letters. You can restrict it to letters with:

sed 's/\(.\)/\1\n/g' file | grep '[A-Za-z]' | sort | uniq -c

If you want to treat uppercase and lowercase as the same, just add a translation:

sed 's/\(.\)/\1\n/g' file | tr '[:upper:]' '[:lower:]' | grep '[a-z]' | sort | uniq -c
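A quick check of the case-folding pipeline (GNU sed is assumed, since \n in the replacement is a GNU extension; sample.txt is an arbitrary name):

```shell
printf 'AaB\n' > sample.txt

# Put each character on its own line, fold uppercase to lowercase,
# drop non-letters (including the empty line left by each trailing
# newline), then group and count the identical letters.
sed 's/\(.\)/\1\n/g' sample.txt | tr '[:upper:]' '[:lower:]' | grep '[a-z]' | sort | uniq -c
```

For this input the counts are "2 a" and "1 b".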

Here is a suggestion:

while read -n 1 c
do
    echo "$c"
done < "$INPUT_FILE" | grep '[[:alpha:]]' | sort | uniq -c | sort -nr
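A quick check of the loop (read -n is a bash/ksh feature, not POSIX sh; f.txt stands in for $INPUT_FILE and is an arbitrary name):

```shell
#!/bin/bash
# Sample input; the file name is arbitrary.
printf 'aab\n' > f.txt

# read -n 1 pulls one character per iteration; grep keeps only
# alphabetic characters, then uniq -c counts them, most frequent first.
while read -n 1 c
do
    echo "$c"
done < f.txt | grep '[[:alpha:]]' | sort | uniq -c | sort -nr
```

For this input the output is "2 a" followed by "1 b" (with uniq's usual count padding).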

Similar to mouviciel's answer above, but more portable to the Bourne and Korn shells used on BSD systems: when you don't have GNU sed (which supports \n in a replacement), you can backslash-escape a literal newline:

sed -e's/./&\
/g' file | sort | uniq -c | sort -nr

Or, to avoid the visual line split on screen, insert a literal newline by typing CTRL+V CTRL+J:

sed -e's/./&\^J/g' file | sort | uniq -c | sort -nr
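A quick check of the backslash-escaped-newline form (the replacement really does span two source lines; f.txt is an arbitrary sample name):

```shell
printf 'aab\n' > f.txt

# & stands for the matched character; the backslash-newline in the
# replacement appends a literal newline after each one.
sed -e 's/./&\
/g' f.txt | sort | uniq -c | sort -nr
```

Note that since nothing is filtered here, the trailing newline of each input line also contributes one empty entry to the counts; for this input the top line is "2 a".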