How many files can I put in a directory?

北恋 2020-11-22 05:15

Does it matter how many files I keep in a single directory? If so, how many files in a directory is too many, and what are the impacts of having too many files? (This is on Linux, if it matters.)

21 Answers
  • 2020-11-22 05:21

    I ran into a similar issue. I was trying to access a directory with over 10,000 files in it. It was taking too long to build the file list and run any kind of command on any of the files.

    I wrote a little PHP script to do this for myself and tried to figure out a way to keep it from timing out in the browser.

    The following is the PHP script I wrote to resolve the issue:

    Listing Files in a Directory with too many files for FTP
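
    The script itself is only linked above; as a rough sketch of the approach (the directory path and batch size here are illustrative assumptions, not the linked code), streaming the listing and flushing output in batches avoids both the memory cost of building the full file list and the browser timeout:

    <?php
    // Minimal sketch: stream a huge directory listing without building
    // the whole file list in memory or hitting the script/browser timeout.
    // '/path/to/huge/dir' and the batch size of 1000 are assumptions.
    set_time_limit(0); // lift PHP's execution time limit

    $count = 0;
    foreach (new DirectoryIterator('/path/to/huge/dir') as $entry) {
        if ($entry->isDot()) {
            continue; // skip "." and ".."
        }
        echo $entry->getFilename() . "<br>\n";
        if (++$count % 1000 === 0) {
            flush(); // push the buffered output to the browser in batches
        }
    }
    ?>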

    Hope it helps someone.

  • 2020-11-22 05:25

    I've been having the same issue. I was trying to store millions of files on an Ubuntu server in ext4, and ended up running my own benchmarks. I found that a flat directory performs far better while being much simpler to use:

    Wrote an article.

  • 2020-11-22 05:25

    If the time involved in implementing a directory partitioning scheme is minimal, I am in favor of it. The first time you have to debug a problem that involves manipulating a 10,000-file directory via the console, you will understand why.

    As an example, F-Spot stores photo files as YYYY\MM\DD\filename.ext, which means the largest directory I have had to deal with while manually manipulating my ~20,000-photo collection is about 800 files. This also makes the files more easily browsable from a third-party application. Never assume that your software is the only thing that will ever access your software's files.
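
    As a rough sketch (the function name and inputs are made up for illustration, not taken from F-Spot), such a date-based layout takes only a few lines of PHP:

    <?php
    // Illustrative sketch of F-Spot-style date partitioning: place each
    // file under YYYY/MM/DD/ derived from its timestamp.
    function dated_path(int $timestamp, string $filename): string {
        return date('Y/m/d', $timestamp) . '/' . $filename;
    }

    echo dated_path(time(), 'photo.jpg'); // e.g. 2020/11/22/photo.jpg
    ?>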

  • 2020-11-22 05:25

    It really depends on the filesystem used, and also on some flags.

    For example, ext3 can hold many thousands of files in a directory, but after a couple of thousand it used to get very slow, mostly when listing the directory but also when opening a single file. A few years ago it gained the 'htree' option, which dramatically shortened the time needed to get an inode given a filename.

    Personally, I use subdirectories to keep most levels under a thousand or so items. In your case, I'd create 256 directories named after the two last hex digits of the ID. Use the last rather than the first digits, so the load ends up evenly balanced.
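
    A minimal sketch of that scheme (the function name is an illustrative assumption, not code from this answer):

    <?php
    // Bucket each numeric ID into one of 256 directories named after its
    // two last hex digits; sequential IDs then spread evenly over buckets.
    function bucket_dir(int $id): string {
        $hex = sprintf('%02x', $id); // at least two hex digits
        return substr($hex, -2) . '/';
    }

    echo bucket_dir(4095) . '4095.dat' . PHP_EOL; // ff/4095.dat
    echo bucket_dir(4096) . '4096.dat' . PHP_EOL; // 00/4096.dat
    ?>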

  • 2020-11-22 05:26

    What most of the answers above fail to show is that there is no "One Size Fits All" answer to the original question.

    In today's environment we have a large conglomerate of different hardware and software: some 32-bit, some 64-bit, some cutting-edge and some tried and true, reliable and never changing. Added to that is a variety of older and newer hardware, older and newer OSes, different vendors (Windows, Unixes, Apple, etc.), and the myriad of utilities and servers that go along with them. As hardware has improved and software has been converted to 64-bit compatibility, there has necessarily been considerable delay in getting all the pieces of this very large and complex world to play nicely with the rapid pace of change.

    IMHO there is no one way to fix a problem. The solution is to research the possibilities and then by trial and error find what works best for your particular needs. Each user must determine what works for their system rather than using a cookie cutter approach.

    I, for example, have a media server with a few very large files. The result is only about 400 files filling a 3 TB drive. Only 1% of the inodes are used, but 95% of the total space is used. Someone else, with a lot of smaller files, may run out of inodes before coming near to filling the space. (On ext4 filesystems, as a rule of thumb, one inode is used for each file or directory.) While the number of files that may theoretically be contained within a directory is nearly unlimited, in practice it is the overall usage pattern, not just raw filesystem capability, that determines realistic limits.

    I hope that all the different answers above have promoted thought and problem solving rather than presenting an insurmountable barrier to progress.

  • 2020-11-22 05:26

    "Depends on filesystem"
    Some users mentioned that the performance impact depends on the filesystem used. Of course. Filesystems like EXT3 can be very slow. But even if you use EXT4 or XFS, you cannot prevent listing a folder through ls or find, or through an external connection like FTP, from becoming slower and slower.

    Solution
    I prefer the same approach as @armandino. For that I use this little PHP function to convert IDs into a file path that results in 1000 files per directory:

    function dynamic_path($int) {
        // divisor 1000 = 1000 files per dir (10000 = 10000 files per dir)
        // chunk size 2 = 100 dirs per dir (3 = 1000 dirs per dir)
        // ceil() makes IDs 1..1000 land in "1/" and 1001..2000 in "2/",
        // matching the results below; the string cast keeps str_split() happy.
        return implode('/', str_split((string) ceil($int / 1000), 2)) . '/';
    }
    

    or you could use the second version if you want to use alpha-numeric characters:

    function dynamic_path2($str) {
        // 26 alpha + 10 num + 3 special chars (._-) = 39 combinations
        // -1 = 39^2 = 1521 files per dir
        // -2 = 39^3 = 59319 files per dir (if every combination exists)
        $left = substr($str, 0, -1);
        return implode('/', str_split($left ? $left : $str[0], 2)) . '/';
    }
    

    results:

    <?php
    $files = explode(',', '1.jpg,12.jpg,123.jpg,999.jpg,1000.jpg,1234.jpg,1999.jpg,2000.jpg,12345.jpg,123456.jpg,1234567.jpg,12345678.jpg,123456789.jpg');
    foreach ($files as $file) {
        echo dynamic_path(basename($file, '.jpg')) . $file . PHP_EOL;
    }
    ?>
    
    1/1.jpg
    1/12.jpg
    1/123.jpg
    1/999.jpg
    1/1000.jpg
    2/1234.jpg
    2/1999.jpg
    2/2000.jpg
    13/12345.jpg
    12/4/123456.jpg
    12/35/1234567.jpg
    12/34/6/12345678.jpg
    12/34/57/123456789.jpg
    
    <?php
    $files = array_merge($files, explode(',', 'a.jpg,b.jpg,ab.jpg,abc.jpg,ffffd.jpg,af_ff.jpg,abcd.jpg,akkk.jpg,bf.ff.jpg,abc-de.jpg,abcdef.jpg,abcdefg.jpg,abcdefgh.jpg,abcdefghi.jpg'));
    foreach ($files as $file) {
        echo dynamic_path2(basename($file, '.jpg')) . $file . PHP_EOL;
    }
    ?>
    
    1/1.jpg
    1/12.jpg
    12/123.jpg
    99/999.jpg
    10/0/1000.jpg
    12/3/1234.jpg
    19/9/1999.jpg
    20/0/2000.jpg
    12/34/12345.jpg
    12/34/5/123456.jpg
    12/34/56/1234567.jpg
    12/34/56/7/12345678.jpg
    12/34/56/78/123456789.jpg
    a/a.jpg
    b/b.jpg
    a/ab.jpg
    ab/abc.jpg
    ff/ff/ffffd.jpg
    af/_f/af_ff.jpg
    ab/c/abcd.jpg
    ak/k/akkk.jpg
    bf/.f/bf.ff.jpg
    ab/c-/d/abc-de.jpg
    ab/cd/e/abcdef.jpg
    ab/cd/ef/abcdefg.jpg
    ab/cd/ef/g/abcdefgh.jpg
    ab/cd/ef/gh/abcdefghi.jpg
    

    As you can see, in the $int version every folder contains up to 1000 files and up to 99 directories, and each of those directories in turn holds up to 1000 files and another 99 directories, and so on.

    But do not forget that too many directories cause the same performance problems!

    Finally, you should think about how to reduce the total number of files. Depending on your use case, you can use CSS sprites to combine multiple tiny images such as avatars, icons, smilies, etc., or, if you have many small non-media files, consider combining them, e.g. in JSON format. In my case I had thousands of mini-caches, and I finally decided to combine them in packs of 10.
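
    A minimal sketch of that packing idea (pack size and file naming are assumptions, not the original code):

    <?php
    // Combine many small cache entries into JSON bundles to reduce the
    // total number of files on disk.
    function pack_caches(array $entries, int $packSize = 10): void {
        foreach (array_chunk($entries, $packSize, true) as $i => $chunk) {
            file_put_contents("cache-pack-$i.json", json_encode($chunk));
        }
    }

    pack_caches(['user1' => '<cached html>', 'user2' => '<cached html>']);
    ?>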
