How many files can I put in a directory?

北恋 2020-11-22 05:15

Does it matter how many files I keep in a single directory? If so, how many files in a directory is too many, and what are the impacts of having too many files? (This is on

21 Answers
  • 2020-11-22 05:28

    Keep in mind that on Linux if you have a directory with too many files, the shell may not be able to expand wildcards. I have this issue with a photo album hosted on Linux. It stores all the resized images in a single directory. While the file system can handle many files, the shell can't. Example:

    -shell-3.00$ ls A*
    -shell: /bin/ls: Argument list too long
    

    or

    -shell-3.00$ chmod 644 *jpg
    -shell: /bin/chmod: Argument list too long
    
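    A common workaround is to have find invoke the command instead of relying on shell expansion, since -exec ... + batches the filenames so each invocation stays under the kernel's argument-length limit (a sketch; adjust the patterns to your setup):

    -shell-3.00$ find . -maxdepth 1 -name 'A*' -exec ls -d {} +
    -shell-3.00$ find . -maxdepth 1 -name '*jpg' -exec chmod 644 {} +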
  • 2020-11-22 05:29

    It absolutely depends on the filesystem. Many modern filesystems use efficient data structures (hashed or B-tree directory indexes) to store the contents of directories, but older filesystems often just appended entries to a flat list, so retrieving a file was an O(n) operation.

    Even if the filesystem does it right, it's still absolutely possible for programs that list directory contents to mess up and do an O(n^2) sort, so to be on the safe side, I'd always limit the number of files per directory to no more than 500.
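    A rough way to see the lookup cost for yourself (a throwaway sketch; the /tmp/bigdir path and the file count are made up for illustration):

    mkdir /tmp/bigdir && cd /tmp/bigdir
    seq 1 100000 | xargs touch      # create 100,000 files without tripping the argument-length limit
    time stat 99999 > /dev/null     # on list-based filesystems this lookup degrades linearly with directory size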

  • 2020-11-22 05:34

    I'm working on a similar problem right now. We have a hierarchical directory structure and use image ids as filenames. For example, an image with id=1234567 is placed in

    ..../45/67/1234567_<...>.jpg
    

    using the last 4 digits of the id to determine where the file goes.

    With a few thousand images, you could use a one-level hierarchy. Our sysadmin suggested no more than a couple of thousand files in any given directory (ext3), for efficiency, backup, or whatever other reasons he had in mind.
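    A minimal sketch of that scheme in shell (the id and the 45/67 layout follow the example above; the base path is whatever your storage root is):

    id=1234567
    d1=${id: -4:2}    # "45": first two of the last four digits (note the space before -4)
    d2=${id: -2}      # "67": the last two digits
    echo "files for id $id go under .../$d1/$d2/"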

  • 2020-11-22 05:35

    The question comes down to what you're going to do with the files.

    Under Windows, any directory with more than 2k files tends to open slowly for me in Explorer. If they're all image files, a directory of more than 1k tends to open very slowly in thumbnail view.

    At one time, the system-imposed limit was 32,767. It's higher now, but even that is way too many files to handle at one time under most circumstances.

  • 2020-11-22 05:37

    ext3 does in fact have directory size limits, and they depend on the block size of the filesystem. There isn't a per-directory "max number" of files, but a per-directory "max number of blocks used to store file entries". Specifically, the size of the directory itself can't grow beyond a b-tree of height 3, and the fanout of the tree depends on the block size. See this link for some details.

    https://www.mail-archive.com/cwelug@googlegroups.com/msg01944.html

    I was bitten by this recently on a filesystem formatted with 2K blocks, which was inexplicably getting directory-full kernel messages ("warning: ext3_dx_add_entry: Directory index full!") when I was copying from another ext3 filesystem. In my case, a directory with a mere 480,000 files could not be copied to the destination.
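    Since the ceiling depends on the block size, you can check it, along with whether the dir_index (htree) feature is enabled, with tune2fs (the device path below is a placeholder for your actual ext3 device):

    # /dev/sdXN is a placeholder; substitute the real device
    tune2fs -l /dev/sdXN | grep -E 'Block size|Filesystem features'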

  • 2020-11-22 05:39

    I realize this doesn't totally answer your question as to how many is too many, but an idea for solving the long-term problem is this: in addition to storing the original file metadata, also store which folder on disk each file is kept in (normalize out that piece of metadata). Once a folder grows beyond some limit you are comfortable with, whether for performance, aesthetics, or whatever other reason, you just create a second folder and start dropping files there.
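    A minimal sketch of that rollover idea in shell (MAX, the folder_N naming, and $newfile are made up for illustration; in practice you would record the chosen folder in your metadata store):

    MAX=2000                        # whatever per-folder limit you are comfortable with
    n=0
    until [ "$(ls "folder_$n" 2>/dev/null | wc -l)" -lt "$MAX" ]; do
        n=$((n + 1))                # current folder is full; try the next one
    done
    mkdir -p "folder_$n"
    cp "$newfile" "folder_$n/"      # store "folder_$n" alongside the file's metadata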
