There are many posts on the internet about the ReadDirectoryChangesW API function missing files when there is a lot of file activity. Most blame the speed at which the ReadDirectoryChangesW loop is called, but that is an incorrect assumption. The best explanation I have seen is in the following thread, in the comment posted on Monday, April 14, 2008 at 2:15:27 PM:
http://social.msdn.microsoft.com/forums/en-US/netfxbcl/thread/4465cafb-f4ed-434f-89d8-c85ced6ffaa8/
The summary is that ReadDirectoryChangesW reports file changes as they leave the file-write-behind queue, not as they are added to it, and if too many changes are queued before being committed, you lose notifications for some of them. You can see this with your own implementation: just write a program that generates 1000+ files in a directory very quickly, count how many change notifications you receive, and you will find there are times when you do not receive all of them.
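A minimal sketch of that kind of test, assuming Windows and a pre-existing directory C:\rdcw_test (the path, buffer size, and sleep timings are illustrative choices, not from the thread): one thread watches the directory with a plain synchronous ReadDirectoryChangesW loop and counts FILE_ACTION_ADDED notifications, while the main thread creates 1000 files as fast as it can, then both counts are printed.

    #include <windows.h>
    #include <atomic>
    #include <cstdio>
    #include <string>
    #include <thread>

    static std::atomic<long> g_added{0};

    // Watcher: block in ReadDirectoryChangesW, count every FILE_ACTION_ADDED.
    void WatchDirectory(const wchar_t* path)
    {
        HANDLE dir = CreateFileW(path, FILE_LIST_DIRECTORY,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                 nullptr, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, nullptr);
        if (dir == INVALID_HANDLE_VALUE) return;

        BYTE buffer[64 * 1024];
        DWORD bytes = 0;
        while (ReadDirectoryChangesW(dir, buffer, sizeof(buffer), FALSE,
                                     FILE_NOTIFY_CHANGE_FILE_NAME, &bytes, nullptr, nullptr))
        {
            if (bytes == 0) continue;   // zero bytes: the buffer overflowed, details were lost
            for (BYTE* p = buffer;;)
            {
                auto* info = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(p);
                if (info->Action == FILE_ACTION_ADDED)
                    ++g_added;
                if (info->NextEntryOffset == 0) break;
                p += info->NextEntryOffset;
            }
        }
        CloseHandle(dir);
    }

    int main()
    {
        const wchar_t* dirPath = L"C:\\rdcw_test";   // assumed to already exist
        std::thread(WatchDirectory, dirPath).detach();
        Sleep(500);                                  // let the watcher arm itself

        const int fileCount = 1000;
        for (int i = 0; i < fileCount; ++i)
        {
            std::wstring name = std::wstring(dirPath) + L"\\file_" + std::to_wstring(i) + L".tmp";
            HANDLE h = CreateFileW(name.c_str(), GENERIC_WRITE, 0, nullptr,
                                   CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
            if (h != INVALID_HANDLE_VALUE) CloseHandle(h);
        }

        Sleep(3000);  // give pending notifications time to drain
        wprintf(L"Created %d files, saw %ld FILE_ACTION_ADDED notifications\n",
                fileCount, g_added.load());
        return 0;
    }

If the two numbers differ, you have reproduced the loss described in the linked comment.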
The question is: has anyone found a reliable way to use the ReadDirectoryChangesW function without having to flush the volume each time? Flushing is not allowed if the user is not an Administrator, and it can also take some time to complete.
If the API is unreliable, then a workaround may be your only option. That will most likely involve keeping track of last-modified times and filenames. It doesn't mean you have to poll for changes, though; you can still use the FileSystemWatcher as the trigger for checking.
So if you keep track of the last 50-100 times a ReadDirectoryChangesW/FileSystemWatcher event fired, and you see that events are arriving rapidly, you can detect that condition and trigger a full check for changed files (setting a flag to temporarily ignore further bogus FSW events) within a few seconds.
Since some people in the comments are confused about this solution: what I am proposing is that you monitor how fast events arrive from ReadDirectoryChangesW, and when they arrive too fast, fall back to a workaround (usually a manual sweep of the directory). A sketch of that approach follows.
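A rough C++17 sketch of that idea (the class names, the 100-event history size, and the 500 ms burst window are illustrative values, not part of the answer): remember when the last N events arrived, and if a full window of events came in faster than the threshold, stop trusting individual notifications and do a manual sweep that compares cached last-write times.

    #include <windows.h>
    #include <chrono>
    #include <deque>
    #include <map>
    #include <string>

    using Clock = std::chrono::steady_clock;

    // Detects a burst: true when the last kHistorySize events all arrived
    // within kBurstWindow, i.e. notifications are coming too fast to trust.
    class BurstDetector {
    public:
        bool OnEvent() {
            const auto now = Clock::now();
            history_.push_back(now);
            if (history_.size() > kHistorySize) history_.pop_front();
            return history_.size() == kHistorySize &&
                   (now - history_.front()) < kBurstWindow;
        }
    private:
        static constexpr size_t kHistorySize = 100;                   // "last 50-100 events"
        static constexpr std::chrono::milliseconds kBurstWindow{500}; // illustrative threshold
        std::deque<Clock::time_point> history_;
    };

    // Manual fallback: walk the directory and flag every file whose last-write
    // time differs from what we have cached (files seen for the first time are
    // flagged too).
    void SweepDirectory(const std::wstring& dir,
                        std::map<std::wstring, FILETIME>& lastSeen)
    {
        WIN32_FIND_DATAW fd;
        HANDLE h = FindFirstFileW((dir + L"\\*").c_str(), &fd);
        if (h == INVALID_HANDLE_VALUE) return;
        do {
            if (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) continue;
            FILETIME& cached = lastSeen[fd.cFileName];
            if (CompareFileTime(&cached, &fd.ftLastWriteTime) != 0) {
                cached = fd.ftLastWriteTime;
                // ...treat fd.cFileName as changed here...
            }
        } while (FindNextFileW(h, &fd));
        FindClose(h);
    }

In the watcher's event callback you would call OnEvent(); when it returns true, set the "ignore further events" flag the answer mentions, run SweepDirectory once things quiet down, and then clear the flag.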
We've never seen ReadDirectoryChangesW be 100% reliable, but the best way to handle it is to separate the "reporting" from the "handling".
My implementation has one thread whose only job is to re-queue all events, and a second thread that processes the intermediate queue. You basically want to impede the reporting of events as little as possible.
Keep in mind that under high CPU load you can also end up impeding the reporting of watcher events.
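A minimal sketch of that separation, using standard-library threading (the ChangeEvent/EventQueue names and the tiny demo are illustrative, not from the answer): the watcher thread only copies each notification into an intermediate queue and immediately goes back to watching, while a worker thread drains the queue and does the slow handling.

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    struct ChangeEvent {
        std::wstring  fileName;
        unsigned long action;     // a FILE_ACTION_* value copied out of the buffer
    };

    // Thread-safe intermediate queue between the reporting and handling threads.
    class EventQueue {
    public:
        void Push(ChangeEvent e) {
            { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(e)); }
            cv_.notify_one();
        }
        ChangeEvent Pop() {
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return !q_.empty(); });
            ChangeEvent e = std::move(q_.front());
            q_.pop();
            return e;
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<ChangeEvent> q_;
    };

    // Tiny demo: the worker thread handles events while "the watcher" (main
    // here, for brevity) pushes them; in a real program the watcher thread
    // would push straight from its ReadDirectoryChangesW/FSW callback.
    int main() {
        EventQueue queue;
        std::thread worker([&] {
            for (int i = 0; i < 2; ++i) {
                ChangeEvent e = queue.Pop();
                wprintf(L"handled %ls (action %lu)\n", e.fileName.c_str(), e.action);
            }
        });
        queue.Push({L"a.txt", 1});   // 1 == FILE_ACTION_ADDED
        queue.Push({L"b.txt", 3});   // 3 == FILE_ACTION_MODIFIED
        worker.join();
        return 0;
    }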
I ran into the same problem, but I didn't find a solution that guarantees receiving all events. In several tests I found that ReadDirectoryChangesW should be called again as quickly as possible after GetQueuedCompletionStatus returns. My guess is that if the file system produces changes much faster than the application can process them, the application can lose some events.
In any case, I separated the parsing logic from the monitoring logic and put the parsing on its own thread.
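A hedged sketch of that completion-port pattern (the Watch struct, the directory path, buffer size, and notify filters are illustrative): after GetQueuedCompletionStatus returns, copy the notification data aside, call ReadDirectoryChangesW again immediately, and only then hand the copy to the parsing thread.

    #include <windows.h>
    #include <vector>

    struct Watch {
        HANDLE     dir;
        OVERLAPPED ov;
        BYTE       buffer[64 * 1024];
    };

    // Issue (or re-issue) the asynchronous ReadDirectoryChangesW on the watch.
    bool Rearm(Watch& w)
    {
        ZeroMemory(&w.ov, sizeof(w.ov));
        DWORD unused = 0;   // byte count is delivered via the completion port instead
        return ReadDirectoryChangesW(w.dir, w.buffer, sizeof(w.buffer), TRUE,
                                     FILE_NOTIFY_CHANGE_FILE_NAME |
                                     FILE_NOTIFY_CHANGE_LAST_WRITE,
                                     &unused, &w.ov, nullptr) != 0;
    }

    void MonitorLoop(HANDLE iocp, Watch& w)
    {
        for (;;)
        {
            DWORD bytes = 0;
            ULONG_PTR key = 0;
            OVERLAPPED* ov = nullptr;
            if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE))
                break;

            // Note: a zero-byte completion generally means the internal buffer
            // overflowed and details were dropped; a manual rescan is the only recovery.
            std::vector<BYTE> copy(w.buffer, w.buffer + bytes);

            // Re-arm as quickly as possible: the longer the handle sits without
            // a pending ReadDirectoryChangesW, the easier it is to lose changes.
            Rearm(w);

            // Hand `copy` off to the parsing thread here (for example via a
            // queue like the one sketched above); do not parse in this loop.
        }
    }

    int main()
    {
        Watch w{};
        w.dir = CreateFileW(L"C:\\rdcw_test", FILE_LIST_DIRECTORY,   // path is illustrative
                            FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                            nullptr, OPEN_EXISTING,
                            FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OVERLAPPED, nullptr);
        if (w.dir == INVALID_HANDLE_VALUE) return 1;
        HANDLE iocp = CreateIoCompletionPort(w.dir, nullptr, 0, 1);  // create port + associate
        if (!Rearm(w)) return 1;
        MonitorLoop(iocp, w);
        return 0;
    }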
Source: https://stackoverflow.com/questions/57254/how-to-keep-readdirectorychangesw-from-missing-file-changes