I often use the find command on Linux and macOS. I just discovered the parallel command, and I would like to combine it with find.
Parallel processing makes sense when your work is CPU-bound (the CPU does the work, and the peripherals are mostly idle), but here you are trying to improve the performance of a task which is I/O-bound (the CPU is mostly idle, waiting for a busy peripheral). In this situation, adding parallelism will only add congestion, as multiple tasks will be fighting over the already-starved I/O bandwidth.
On macOS, the system already indexes all your data anyway (including the contents of word-processing documents, PDFs, email messages, etc.); there's a friendly magnifying glass on the menu bar at the upper right where you can access a much faster and more versatile search, called Spotlight. (Though I agree that some of the more sophisticated controls of find are missing; and the "user friendly" design gets in the way for me when it guesses what I want, and guesses wrong.)
Some Linux distros offer a similar facility; I would expect that to be the norm for anything with a GUI these days, though the details will differ between systems.
A more traditional solution on any Unix-like system is the locate command, which performs a similar but more limited task; it will create a (very snappy) index on file names, so you can say
locate fnord
to very quickly obtain every file whose name matches fnord. The index is simply a copy of the results of a find run from last night (or however you schedule the backend to run). The command is already installed on macOS, though you have to enable the back end if you want to use it. (Just run locate locate to get further instructions.)
You could build something similar yourself if you find yourself often looking for files with a particular set of permissions and a particular owner, for example (these are not features which locate records); just run a nightly (or hourly, etc.) find which collects these features into a database -- or even just a text file -- which you can then search nearly instantly.
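As a minimal sketch of that idea -- assuming GNU find for the -printf option (the stock BSD find on macOS lacks it) and an index location of your choosing:
# Rebuild the index; run this from cron, nightly or hourly
find "$HOME" -printf '%m %u %p\n' > "$HOME/.fileindex" 2>/dev/null
# A lookup for e.g. files with mode 0600 owned by alice is then nearly instant
grep '^600 alice ' "$HOME/.fileindex"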
For running jobs in parallel, you don't really need GNU parallel, though it does offer a number of conveniences and enhancements for many use cases; you already have xargs -P. (The xargs on macOS, which originates from BSD, is more limited than GNU xargs, which is what you'll find on many Linuxes; but it does have the -P option.)
For example, here's how to run eight parallel find instances with xargs -P:
printf '%s\n' */ | xargs -I {} -P 8 find {} -name '*.ogg'
(This assumes the wildcard doesn't match directories which contain single quotes or newlines or other shenanigans; GNU xargs has the -0 option to fix a large number of corner cases like that; then you'd use '%s\0' as the format string for printf.)
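Putting those together, the null-delimited variant of the pipeline would look like this (assuming GNU xargs):
printf '%s\0' */ | xargs -0 -I {} -P 8 find {} -name '*.ogg'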
As the parallel documentation readily explains, its general syntax is
parallel -options command ...
where {} will be replaced with the current input line (if it is missing, it will be implicitly added at the end of command ...) and the (obviously optional) ::: special token allows you to specify an input source on the command line instead of as standard input.
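To illustrate, these two invocations should be equivalent (reusing the file mask from the earlier examples):
printf '%s\n' */ | parallel -j8 find {} -name '*.ogg'
parallel -j8 find {} -name '*.ogg' ::: */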
Anything outside of those special tokens is passed on verbatim, so you can add find options to your heart's content just by specifying them literally.
parallel -j8 find {} -type f -name '*.ogg' ::: */
I don't speak zsh, but refactored for regular POSIX sh, your function could be something like
ff () {
    parallel -j8 find {} -type f -iname "$2" ::: "$1"
}
though I would perhaps switch the arguments so you can specify a name pattern and a list of files to search, à la grep.
ff () {
    # "local" is not POSIX but works in many sh versions
    local pat=$1
    shift
    parallel -j8 find {} -type f -iname "$pat" ::: "$@"
}
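With the arguments switched like that, you would call it with the pattern first, followed by any number of directories to search (the directory names here are just placeholders):
ff '*.ogg' Music Podcasts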
But again, spinning your disk to find things which are already indexed is probably something you should stop doing, rather than facilitate.
As others have written, find is I/O-heavy and most likely not limited by your CPUs.
But depending on your disks, it can be better to run the jobs in parallel.
NVMe disks are known for performing best if there are 4-8 accesses running in parallel. Some network file systems also work faster with multiple processes.
So some level of parallelization can make sense, but you really have to measure to be sure.
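A simple way to measure is to time both variants on your actual data; something like this rough sketch (bear in mind that later runs benefit from a warm disk cache, so repeat the runs in both orders):
time find . -name '*.ogg' > /dev/null
time parallel -j8 find {} -name '*.ogg' ::: * > /dev/null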
To parallelize find with 8 jobs running in parallel:
parallel -j8 find {} ::: *
This works best if you are in a dir that has many subdirs: Each subdir will then be searched in parallel. Otherwise this may work better:
parallel -j8 find {} ::: */*
Basically the same idea, but now using subdirs of dirs.
If you want the results printed as soon as they are found (and not after the find is finished), use --line-buffer (or --lb):
parallel --lb -j8 find {} ::: */*
To learn about GNU Parallel, spend 20 minutes reading chapters 1+2 of https://doi.org/10.5281/zenodo.1146014 and print the cheat sheet: https://www.gnu.org/software/parallel/parallel_cheat.pdf
Your command line will thank you for it.
Just run the find in the background for each first-level path separately. In the example below, this creates 12 background jobs, one per first-level subdirectory:
$ for i in [A-Z]*/ ; do find "$i" -name "*.ogg" >> logfile & done
[1] 16945
[2] 16946
[3] 16947
# many lines
[1] Done find "$i" -name "*.ogg"
[2] Done find "$i" -name "*.ogg"
# many lines
[11] Done find "$i" -name "*.ogg"
[12] Done find "$i" -name "*.ogg"
$
Doing so creates many find processes, which the system will dispatch to different cores like any other process.
Note 1: it looks like a somewhat crude way to do it, but it just works.
Note 2: the find command itself is not hard on the CPUs/cores; in 99% of use cases this is just useless, because the find process will spend its time waiting for I/O from the disks. Then using parallel or similar commands won't help.
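If you do go this route, you may want the shell to block until all the background jobs have finished, and to give each job its own log file so concurrent appends can't interleave. A small sketch using only POSIX shell features (the log file names are just illustrative):
for i in [A-Z]*/ ; do find "$i" -name "*.ogg" > "log.${i%/}" & done
wait  # returns once every background find has finished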
You appear to want to be able to locate files quickly in large directories under macOS. I think the correct tool for that job is mdfind.
I made a hierarchy with 10,000,000 files under my home directory, all with unique names that resemble UUIDs, e.g. 80104d18-74c9-4803-af51-9162856bf90d. I then tried to find one with:
mdfind -onlyin ~ -name 80104d18-74c9-4803-af51-9162856bf90d
The result was instantaneous and too fast to measure, so I did 100 lookups; in total they took under 20 s, so on average a lookup takes under 0.2 s.
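The timing run was along these lines (a sketch; the name is just the example file from above):
time for i in $(seq 100); do mdfind -onlyin ~ -name 80104d18-74c9-4803-af51-9162856bf90d > /dev/null; done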
If you actually wanted to locate 100 files, you can group them into a single search like this:
mdfind -onlyin ~ 'kMDItemDisplayName==ffff4bbd-897d-4768-99c9-d8434d873bd8 || kMDItemDisplayName==800e8b37-1f22-4c7b-ba5c-f1d1040ac736 || kMDItemDisplayName==800e8b37-1f22-4c7b-ba5c-f1d1040ac736'
and it executes even faster.
If you only know a partial filename, you can use:
mdfind -onlyin ~ "kMDItemDisplayName = '*cdd90b5ef351*'"
/Users/mark/StackOverflow/MassiveDirectory/800f0058-4021-4f2d-8f5c-cdd90b5ef351
You can also use creation dates, file types, author, video duration, or tags in your search. For example, you can find all PNG images whose name contains "25DD954D73AF" like this:
mdfind -onlyin ~ "kMDItemKind = 'PNG image' && kMDItemDisplayName = '*25DD954D73AF*'"
/Users/mark/StackOverflow/MassiveDirectory/9A91A1C4-C8BF-467E-954E-25DD954D73AF.png
If you want to know what fields you can search on, take a file of the type you want to be able to look for and run mdls on it; you will see all the fields that macOS knows about:
mdls SomeMusic.m4a
mdls SomeVideo.avi
mdls SomeMS-WordDocument.doc
Also, unlike with locate, there is no need to update a database frequently.