I was trying to use sed to count all the lines of files with a particular extension:
find . -name '*.m' -exec wc -l {} \; | sed ...
You can cat all the files through a single wc instance to get the total number of lines:
find . -name '*.m' -exec cat {} \; | wc -l
You could also use sed in place of wc for counting lines:
find . -name '*.m' -exec sed -n '$=' {} \;
Here '$' addresses the last line of the file and '=' prints the current line number, so sed outputs each file's line count.
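Note that this prints one count per file rather than a grand total. As a rough sketch (the /tmp paths and file contents below are invented for the demo), the per-file counts can be summed with awk:

```shell
# Create two throwaway .m files (hypothetical demo data).
mkdir -p /tmp/sed_count_demo
printf 'a\nb\nc\n' > /tmp/sed_count_demo/one.m   # 3 lines
printf 'x\ny\n'    > /tmp/sed_count_demo/two.m   # 2 lines

# sed -n '$=' prints each file's line count on its own line;
# awk sums those per-file counts into a single total.
find /tmp/sed_count_demo -name '*.m' -exec sed -n '$=' {} \; |
  awk '{ total += $1 } END { print total }'
# prints: 5
```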
EDIT
You could also try something like sloccount.
Hm, the solution with cat may be problematic if you have many files, especially big ones.
The second solution doesn't give a total, just lines per file, as I tested.
I'd prefer something like this:
find . -name '*.m' | xargs wc -l | tail -1
This will do the job fast, no matter how many files you have or how big they are.
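A minimal sketch of that pipeline (the /tmp directory and file contents are invented for the demo):

```shell
# Create two throwaway .m files (hypothetical demo data).
mkdir -p /tmp/wc_tail_demo
printf 'a\nb\n' > /tmp/wc_tail_demo/one.m   # 2 lines
printf 'c\n'    > /tmp/wc_tail_demo/two.m   # 1 line

# wc -l prints one line per file plus a final "total" line;
# tail -1 keeps only that total.
find /tmp/wc_tail_demo -name '*.m' | xargs wc -l | tail -1
# prints something like: "3 total"
```

One caveat: if the file list is long enough that xargs splits it across several wc invocations, there will be several "total" lines, and tail -1 reports only the last batch's total.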
On modern GNU platforms, find takes a -print0 option and wc takes a --files0-from option; combined, they count the lines in each file and print a total at the end. Example:
find . -name '*.c' -type f -print0 | wc -l --files0-from=-
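A quick sketch of that combination (assumes GNU findutils and coreutils; the /tmp files below are invented for the demo):

```shell
# Create two throwaway .c files (hypothetical demo data).
mkdir -p /tmp/files0_demo
printf 'a\nb\n' > /tmp/files0_demo/one.c   # 2 lines
printf 'c\n'    > /tmp/files0_demo/two.c   # 1 line

# find emits NUL-separated paths; wc reads them via --files0-from=-
# and prints one count per file followed by a grand total line.
find /tmp/files0_demo -name '*.c' -type f -print0 | wc -l --files0-from=-
# last line printed: "3 total"
```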
Most of the answers here won't work well for a large number of files. Some will break if the list of file names is too long for a single command line, while others are inefficient because -exec starts a new process for every file. I believe a robust and efficient solution would be:
find . -type f -name "*.m" -print0 | xargs -0 cat | wc -l
Using cat in this way is fine, as its output is piped straight into wc, so only a small amount of the files' content is kept in memory at once. If there are too many files for a single invocation of cat, cat will be called multiple times, but all the output will still be piped into a single wc process.
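A short sketch of why the -print0/xargs -0 pairing matters: paths containing spaces survive intact (the demo paths below are invented):

```shell
# One file lives in a directory whose name contains a space
# (hypothetical demo data).
mkdir -p '/tmp/xargs0_demo/dir with space'
printf 'a\n'    > '/tmp/xargs0_demo/dir with space/one.m'   # 1 line
printf 'b\nc\n' > /tmp/xargs0_demo/two.m                    # 2 lines

# NUL-delimited paths keep the space in the directory name from
# splitting into two bogus arguments; wc sees one concatenated
# stream and prints a single total.
find /tmp/xargs0_demo -type f -name '*.m' -print0 | xargs -0 cat | wc -l
# prints: 3
```

With a plain newline-delimited `find | xargs cat`, the same path would be split at the space and cat would fail on the fragments.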
You may also get the nice per-file formatting from wc with:
wc `find . -name '*.m'`
Note that this breaks if any file path contains whitespace or shell metacharacters.