My current solution has find run printf for every individual file it matches and then counts that output, but this takes far too long when there are more than 10000 results. Is there no faster way to count the files?
Why not
find <expr> | wc -l
as a simple portable solution? Your original solution spawns a new printf process for every individual file found, and that's very expensive (as you've just found).
Note that this will overcount if you have filenames with embedded newlines, but if you have those then I suspect your problems run a little deeper.
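To see that caveat in action, here is a small illustration (assuming an otherwise empty directory and a filesystem that allows newlines in file names, which most Unix filesystems do): two files yield three lines of find output, so wc -l reports 3.
$ touch 'plain' $'with\nnewline'
$ find . -type f | wc -l
3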
Try this instead (requires find's -printf support):
find <expr> -type f -printf '.' | wc -c
It will be more reliable and faster than counting lines.
Note that I use find's own -printf here, not an external printf command.
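For contrast, a per-file variant along the lines the question describes (a hedged reconstruction, not necessarily the exact original command) forks one printf process per matching file, which is what makes it so slow on large trees:
find <expr> -type f -exec printf '.' \; | wc -c
The -printf version prints every dot from inside the single find process, so no extra processes are spawned.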
Let's bench a bit:
$ ls -1
a
e
l
ll.sh
r
t
y
z
My snippet benchmark:
$ time find -type f -printf '.' | wc -c
8
real 0m0.004s
user 0m0.000s
sys 0m0.007s
With full lines:
$ time find -type f | wc -l
8
real 0m0.006s
user 0m0.003s
sys 0m0.000s
So my solution is faster =) (the important part is the real line)
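To reproduce the comparison at the scale mentioned in the question (more than 10000 results), you could do something like the following sketch, which fills a hypothetical scratch directory /tmp/findbench with 10000 empty files first:
$ mkdir -p /tmp/findbench && cd /tmp/findbench
$ touch $(seq 1 10000)
$ time find -type f -printf '.' | wc -c
$ time find -type f | wc -l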
This is my countfiles function in my ~/.bashrc (it's reasonably fast, should work with both Linux and FreeBSD find, and does not get fooled by file paths containing newline characters; the final wc just counts NUL bytes):
countfiles ()
{
    # Emit a NUL after each matching file name, delete everything except
    # the NUL bytes, then count them: one byte per file, no matter what
    # characters the file names contain.
    command find "${1:-.}" -type f -name "${2:-*}" -print0 |
        command tr -dc '\0' | command wc -c;
    return 0
}
countfiles                 # count every file under the current directory
countfiles ~ '*.txt'       # count *.txt files under your home directory
This solution is certainly slower than some of the other find -> wc solutions here, but if you were inclined to do something else with the file names in addition to counting them, you could read them from the find output.
n=0
while read -r -d ''; do
    ((n++))   # count
    # maybe perform another act on the file; its name is in "$REPLY"
done < <(find <expr> -print0)
echo "$n"
It is just a modification of a solution found in BashGuide that properly handles files with nonstandard names: -print0 makes find delimit its output with NUL bytes, and read -d '' (an empty string, which read treats as a NUL delimiter) consumes that output one file name at a time.
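As an example of performing another act on each file, here is a hedged variation (the files array is hypothetical, not part of the original) that collects the names into an array while counting them:
n=0
files=()
while read -r -d ''; do
    ((n++))
    files+=("$REPLY")    # read with no variable name leaves the raw line in REPLY
done < <(find <expr> -print0)
echo "$n files collected"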