I have one script that only writes data to stdout. I need to run it for multiple files, generating a different output file for each input file, and I was wondering how to do that.
A simple solution would be to put a wrapper around your script:
#!/bin/sh
myscript "$1" > "$1.stdout"
Call it myscript2 and invoke it with find:
find . -type f -exec myscript2 {} \;
Note that although most implementations of find allow you to do what you have done, technically the behavior of find is unspecified if you use {} more than once in the argument list of -exec.
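If you'd rather not keep a separate wrapper file, a minimal sketch of the same idea inline (this stays within the single-{} rule by passing each filename as a positional parameter):
find . -type f -exec sh -c 'myscript "$1" > "$1.stdout"' _ {} \;
The _ fills $0 in the child shell, so the filename arrives as $1.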
You can do it with eval. It may be ugly, but so is having to make a shell script for this. Plus, it's all on one line. For example:
find . -type f -exec bash -c "eval md5sum {} > {}.sum" \;
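Be warned that substituting {} directly into the command string is fragile: a filename containing spaces or shell metacharacters will break the command, and a maliciously named file can even get parts of its name executed. Passing the filename as an argument, as in the sh -c sketch above, avoids this.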
If you export your environment variables, they'll already be present in the child shell (if you use bash -c instead of sh -c, and your parent shell is itself bash, then you can also export functions in the parent shell and have them usable in the child; see export -f).
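For instance, a quick sketch of the export -f route, where myfn is a hypothetical function standing in for your own logic:
myfn() { md5sum "$1" > "$1.sum"; }
export -f myfn
find . -type f -exec bash -c 'myfn "$1"' _ {} \;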
Moreover, by using -exec ... {} +, you can limit the number of shells to the smallest possible number needed to pass all arguments on the command line:
set -a # turn on automatic export of all variables
source initscript1
source initscript2
# pass as many filenames as possible to each sh -c, iterating over them directly
find * -name '*.stdout' -prune -o -type f \
  -exec sh -c 'for arg; do myscript "$arg" > "${arg}.stdout"; done' _ {} +
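(Here for arg with no in list iterates over the positional parameters, so each batch of filenames that find hands to a single sh -c invocation is processed by one shell.)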
Alternatively, you can perform the execution in your current shell directly:
while IFS= read -r -d '' filename; do
myscript "$filename" >"${filename}.out"
done < <(find * -name '*.stdout' -prune -o -type f -print0)
See UsingFind for a discussion of safely and correctly performing bulk actions through find, and BashFAQ #24 for a discussion of process substitution (the <(...) syntax) and how it ensures that operations are performed in the parent shell.
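As a sketch of the pitfall BashFAQ #24 describes, suppose you also want to count files as you go:
count=0
while IFS= read -r -d '' filename; do
  count=$((count + 1))
done < <(find . -type f -print0)
echo "Found $count files"   # count survives: the loop ran in the parent shell
Had find been piped into the loop instead, the loop would run in a subshell and count would still be 0 afterward.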