How many files can I put in a directory?

北恋 · 2020-11-22 05:15

Does it matter how many files I keep in a single directory? If so, how many files in a directory is too many, and what are the impacts of having too many files? (This is on a Linux server, by the way.)

21 Answers
  • 2020-11-22 05:40

    I have a directory with 88,914 files in it. Like yours, it is used for storing thumbnails, and it lives on a Linux server.

    Listing the files via FTP or a PHP function is slow, yes, but there is also a performance hit when serving an individual file. E.g. www.website.com/thumbdir/gh3hg4h2b4h234b3h2.jpg has a wait time of 200-400 ms. By comparison, on another site of mine with around 100 files in the directory, an image is displayed after only ~40 ms of waiting.

    I've given this answer because most people have only written about how directory search functions will perform, which you won't be using on a thumbnail folder (you'll just be statically serving files), whereas you'll be interested in the performance of actually using the files.
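    As a rough sketch of how you could measure this yourself, here is a small Python timer. The directory paths below are made-up placeholders, and it measures local file reads rather than full HTTP round-trips:

    ```python
    import os
    import random
    import time

    def mean_read_latency_ms(directory, samples=200):
        """Open and read random files from `directory`; return mean latency in ms."""
        names = os.listdir(directory)
        start = time.perf_counter()
        for _ in range(samples):
            path = os.path.join(directory, random.choice(names))
            with open(path, "rb") as f:
                f.read()
        return (time.perf_counter() - start) / samples * 1000

    # Hypothetical paths: a large thumbnail directory vs. a small one.
    for d in ("/var/www/thumbdir", "/var/www/small_dir"):
        print(d, f"{mean_read_latency_ms(d):.1f} ms per file")
    ```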

  • 2020-11-22 05:41

    For what it's worth, I just created a directory on an ext4 file system with 1,000,000 files in it, then randomly accessed those files through a web server. I didn't notice any penalty for accessing those files over (say) having only 10 files there.

    This is radically different from my experience doing the same thing on NTFS a few years back.
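    If you want to repeat the experiment, something along these lines should do it (the testdir path is an assumption, and creating a million files takes a while):

    ```python
    import os
    import random
    import time

    base = "testdir"  # assumed to sit on an ext4 filesystem
    os.makedirs(base, exist_ok=True)

    # Create 1,000,000 small files; lower N for a quicker run.
    N = 1_000_000
    for i in range(N):
        with open(os.path.join(base, f"f{i:07d}"), "w") as f:
            f.write("x")

    # Access files by constructed name, so no directory listing is involved,
    # which matches how a web server resolves a requested URL.
    ACCESSES = 1000
    start = time.perf_counter()
    for _ in range(ACCESSES):
        with open(os.path.join(base, f"f{random.randrange(N):07d}")) as f:
            f.read()
    elapsed = time.perf_counter() - start
    print(f"{elapsed / ACCESSES * 1000:.3f} ms per access")
    ```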

  • 2020-11-22 05:42

    FAT32:

    • Maximum number of files: 268,173,300
    • Maximum number of files per directory: 2^16 - 1 (65,535)
    • Maximum file size: 2 GiB - 1 without LFS, 4 GiB - 1 with

    NTFS:

    • Maximum number of files: 2^32 - 1 (4,294,967,295)
    • Maximum file size
      • Implementation: 2^44 - 2^16 bytes (16 TiB - 64 KiB)
      • Theoretical: 2^64 - 2^16 bytes (16 EiB - 64 KiB)
    • Maximum volume size
      • Implementation: 2^32 - 1 clusters (256 TiB - 64 KiB)
      • Theoretical: 2^64 - 1 clusters (1 YiB - 64 KiB)

    ext2:

    • Maximum number of files: 10^18
    • Maximum number of files per directory: ~1.3 × 10^20 (performance issues past 10,000)
    • Maximum file size
      • 16 GiB (block size of 1 KiB)
      • 256 GiB (block size of 2 KiB)
      • 2 TiB (block size of 4 KiB)
      • 2 TiB (block size of 8 KiB)
    • Maximum volume size
      • 4 TiB (block size of 1 KiB)
      • 8 TiB (block size of 2 KiB)
      • 16 TiB (block size of 4 KiB)
      • 32 TiB (block size of 8 KiB)

    ext3:

    • Maximum number of files: min(volumeSize / 2^13, numberOfBlocks)
    • Maximum file size: same as ext2
    • Maximum volume size: same as ext2

    ext4:

    • Maximum number of files: 2^32 - 1 (4,294,967,295)
    • Maximum number of files per directory: unlimited
    • Maximum file size: 2^44 - 1 bytes (16 TiB - 1)
    • Maximum volume size: 2^48 - 1 bytes (256 TiB - 1)
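    The powers of two above are easy to sanity-check; a few spot checks of the arithmetic:

    ```python
    # Spot-check the NTFS and ext4 figures listed above.
    KiB, TiB, EiB = 2**10, 2**40, 2**60

    assert 2**44 - 2**16 == 16 * TiB - 64 * KiB            # NTFS file size (implementation)
    assert 2**64 - 2**16 == 16 * EiB - 64 * KiB            # NTFS file size (theoretical)
    assert (2**32 - 1) * 64 * KiB == 256 * TiB - 64 * KiB  # NTFS volume, 64 KiB clusters
    assert 2**48 - 1 == 256 * TiB - 1                      # ext4 volume size
    print("all figures check out")
    ```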
  • 2020-11-22 05:42

    The biggest issue I've run into was on a 32-bit system. Once you pass a certain number of files, tools like 'ls' stop working.

    Trying to do anything with that directory once you pass that barrier becomes a huge problem.
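    One workaround, if you just need to walk such a directory, is to stream the entries instead of building the whole sorted list the way ls does. A sketch (the path is a placeholder):

    ```python
    import os

    # os.scandir yields entries lazily instead of materializing and
    # sorting the full listing, so it keeps working on huge directories.
    count = 0
    with os.scandir("/path/to/huge_dir") as entries:
        for entry in entries:
            count += 1
            if count <= 10:
                print(entry.name)  # peek at the first few names
    print("total entries:", count)
    ```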

  • 2020-11-22 05:44

    It depends a bit on the specific filesystem in use on the Linux server. Nowadays the default is ext3 with dir_index, which makes searching large directories very fast.

    So speed shouldn't be an issue, other than the one you already noted, which is that listings will take longer.

    There is a limit to the total number of files in one directory; I seem to remember it definitely working up to 32,000 files.
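    If you want to check whether dir_index is actually enabled on your volume, tune2fs lists it under "Filesystem features". A small wrapper, assuming the device is /dev/sda1 (a placeholder) and that you can run it as root:

    ```python
    import subprocess

    # tune2fs -l prints the superblock, including the feature list.
    out = subprocess.run(
        ["tune2fs", "-l", "/dev/sda1"],  # placeholder device
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        if line.startswith("Filesystem features:"):
            print("dir_index enabled:", "dir_index" in line)
    ```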

  • 2020-11-22 05:45

    I recall running a program that created a huge number of files as its output. The files were split into directories of 30,000 each, and I do not recall having any read problems when I had to reuse the produced output. It was on a 32-bit Ubuntu Linux laptop, and even Nautilus displayed the directory contents, albeit after a few seconds.

    ext3 filesystem: similar code on a 64-bit system dealt well with 64,000 files per directory.
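    That kind of bucketing is easy to do at write time; a sketch with a hypothetical helper that caps each subdirectory at 30,000 files:

    ```python
    import os

    FILES_PER_DIR = 30_000  # the per-directory cap described above

    def bucketed_path(base, index):
        """Map file number `index` to a path like base/d0003/f0097123."""
        subdir = os.path.join(base, f"d{index // FILES_PER_DIR:04d}")
        os.makedirs(subdir, exist_ok=True)
        return os.path.join(subdir, f"f{index:07d}")

    # Hypothetical usage: write 100,000 small output files.
    for i in range(100_000):
        with open(bucketed_path("output", i), "w") as f:
            f.write("data\n")
    ```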
