I have split a large text file into a number of sets of smaller ones for performance testing that I'm doing. There are a number of directories like this:
/h
For this kind of thing I always use find together with xargs:
$ find output-* -name "*.chunk.??" | xargs -I{} ./myexecutable -i {} -o {}.processed
Now, since your script processes only one file at a time, using -exec (or -execdir) directly with find, as already suggested, is just as efficient. I'm used to xargs, though, because it's generally much more efficient when feeding a command that operates on many arguments at once. It's a very useful tool to keep in one's utility belt, so I thought it ought to be mentioned.
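To see that batching in action, here's a quick sketch (the scratch directory and file names are made up for illustration; `echo` stands in for a tool that accepts many files at once):

```shell
# Create a scratch directory with a few chunk files.
dir=$(mktemp -d)
touch "$dir"/file.chunk.01 "$dir"/file.chunk.02 "$dir"/file.chunk.03

# Without -I{}, xargs packs as many names as fit onto a single command
# line, so 'echo' runs once with all three paths as arguments:
find "$dir" -name "*.chunk.??" | xargs echo
```

With `-I{}` the command instead runs once per file, which is what the answer above uses since myexecutable only takes one input at a time.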
Use find with -exec. Have a look at the following:
http://tldp.org/LDP/abs/html/moreadv.html
As others have suggested, use find(1):
# Find all files named 'output.*' but NOT named 'output.*.processed'
# under the directory tree rooted at base-directory, and execute a command on
# them:
find base-directory -name 'output.*' '!' -name 'output.*.processed' -exec ./myexecutable -i '{}' -o '{}'.processed ';'
That's what the find command is for.
http://linux.die.net/man/1/find
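A minimal sketch of that approach, assuming the same layout as the question (the base directory and the `.chunk.??` pattern are taken from the other answers; adjust to your actual names):

```shell
# For each matching file under the tree, run the processing command.
# {} is replaced by the file's path, and \; terminates the -exec command.
find /home/brianly -name "*.chunk.??" \
    -exec ./myexecutable -i '{}' -o '{}.processed' \;
```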
From the information provided, it sounds like this would be a completely straightforward translation of your C# idea.
for i in /home/brianly/output-*; do
  for j in "$i/"*.[0-9][0-9]; do
    ./myexecutable -i "$j" -o "$j.processed"
  done
done
Something like this (note that looping over command substitution breaks on file names containing whitespace, so quote the variable and prefer the find -exec approaches above for arbitrary names):

for x in $(find /home/brianonly -type f)
do
  ./yourexecutable -i "$x" -o "$x.processed"
done