Many small files or one big file? (Or, Overhead of opening and closing file handles) (C++)

Backend · Unresolved · 6 answers · 628 views
春和景丽 2021-02-01 21:22

I have created an application that does the following:

  1. Make some calculations, write calculated data to a file - repeat for 500,000 times (over al
6 Answers
  • 2021-02-01 21:32

    Each file is ~212k, so over all I have ~300GB of data. It looks like the entire process takes ~40 days ... all the calculations are serial (each calculation depends on the one before), so I can't parallelize this process across different CPUs or PCs. ... pretty sure most of the overhead goes to file system access ... Every time I access a file I open a handle to it and then close it once I finish reading the data.

    Writing 300GB of data serially might take ~40 minutes, only a tiny fraction of 40 days. Disk write performance shouldn't be the issue here.

    Your idea of opening the file only once is spot-on. Closing the file after every operation is probably causing your processing to block until the disk has completely written out all the data, negating the benefits of disk caching.
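    A minimal sketch of the difference (file name and iteration count are illustrative, scaled well down from the real 500,000): reopening the file for every record versus keeping one handle open for the whole run.

    ```cpp
    // Compare open-per-write against a single long-lived handle.
    #include <chrono>
    #include <cstdio>
    #include <fstream>

    int main() {
        const int iterations = 5000;        // scaled down from 500,000
        const char* path = "bench.dat";     // hypothetical scratch file
        using clock = std::chrono::steady_clock;

        // Variant A: open and close a handle for every record.
        auto t0 = clock::now();
        for (int i = 0; i < iterations; ++i) {
            std::ofstream out(path, std::ios::binary | std::ios::app);
            out.write(reinterpret_cast<const char*>(&i), sizeof i);
        }
        auto per_op = clock::now() - t0;

        // Variant B: one handle for the whole run.
        std::remove(path);
        t0 = clock::now();
        {
            std::ofstream out(path, std::ios::binary);
            for (int i = 0; i < iterations; ++i)
                out.write(reinterpret_cast<const char*>(&i), sizeof i);
        }
        auto single = clock::now() - t0;

        std::printf("open-per-write: %lld us\nsingle handle:  %lld us\n",
            (long long)std::chrono::duration_cast<std::chrono::microseconds>(per_op).count(),
            (long long)std::chrono::duration_cast<std::chrono::microseconds>(single).count());
        std::remove(path);
        return 0;
    }
    ```

    On most systems the single-handle variant is dramatically faster, but measure on your own machine before concluding anything.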

    My bet is the fastest implementation of this application will use a memory-mapped file; all modern operating systems have this capability. It can end up being the simplest code, too. You'll need a 64-bit processor and operating system, but you should not need 300GB of RAM. Map the whole file into address space at once and just read and write your data with pointers.
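    A sketch of that approach on POSIX (Linux/macOS); Windows would use `CreateFileMapping`/`MapViewOfFile` instead. The file name and record count here are illustrative, not from the original application:

    ```cpp
    // Memory-mapped file: reads and writes become plain pointer accesses,
    // and the OS pages data in and out as needed.
    #include <cstdio>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main() {
        const size_t records = 1000;                 // stand-in for the real data set
        const size_t bytes = records * sizeof(double);

        int fd = open("data.bin", O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, bytes) != 0) { perror("ftruncate"); return 1; }

        // Map the whole file into address space once.
        void* base = mmap(nullptr, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (base == MAP_FAILED) { perror("mmap"); return 1; }

        double* d = static_cast<double*>(base);
        for (size_t i = 0; i < records; ++i)
            d[i] = i * 0.5;                          // "calculation" writes straight to the file

        double sum = 0;
        for (size_t i = 0; i < records; ++i)
            sum += d[i];                             // reading back is just a memory read
        std::printf("sum = %.1f\n", sum);

        munmap(base, bytes);
        close(fd);
        return 0;
    }
    ```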

  • 2021-02-01 21:33

    What about using SQLite? I think you can get away with a single table.
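    A rough sketch of the single-table idea using SQLite's C API (link with `-lsqlite3`; table and column names are made up for illustration). Wrapping many inserts in one transaction is the key, since it avoids a disk sync per row:

    ```cpp
    // Store all calculation results in one SQLite table instead of 500,000 files.
    #include <cstdio>
    #include <sqlite3.h>

    int main() {
        sqlite3* db = nullptr;
        if (sqlite3_open("results.db", &db) != SQLITE_OK) return 1;

        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS results(step INTEGER PRIMARY KEY, value REAL)",
            nullptr, nullptr, nullptr);

        // One transaction around many inserts avoids a disk sync per row.
        sqlite3_exec(db, "BEGIN", nullptr, nullptr, nullptr);
        sqlite3_stmt* ins = nullptr;
        sqlite3_prepare_v2(db, "INSERT OR REPLACE INTO results(step, value) VALUES(?, ?)",
                           -1, &ins, nullptr);
        for (int i = 0; i < 1000; ++i) {
            sqlite3_bind_int(ins, 1, i);
            sqlite3_bind_double(ins, 2, i * 0.5);    // placeholder for the real calculation
            sqlite3_step(ins);
            sqlite3_reset(ins);
        }
        sqlite3_finalize(ins);
        sqlite3_exec(db, "COMMIT", nullptr, nullptr, nullptr);

        std::printf("inserted 1000 rows\n");
        sqlite3_close(db);
        return 0;
    }
    ```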

  • 2021-02-01 21:39

    Before making any changes, it might be useful to run a profiler trace to figure out where most of the time is spent, to make sure you actually optimize the real problem.

  • 2021-02-01 21:41

    From your brief explanation it sounds like xtofl's suggestion of threads is the right way to go. I would recommend you profile your application first, though, to see how the time is divided between IO and CPU.

    Then I would consider three threads joined by two queues.

    1. Thread 1 reads files and loads them into RAM, then places data/pointers in the first queue. If the queue grows past a certain size the thread sleeps; if it drops below a certain size it starts again.
    2. Thread 2 reads the data off the first queue, does the calculations, then writes the results to the second queue.
    3. Thread 3 reads the second queue and writes the data to disk.

    You could consider merging threads 1 and 3; this might reduce contention on the disk, as your app would only do one disk operation at a time.
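    The three-thread pipeline described above can be sketched with a bounded queue between each pair of stages (real code would read and write actual files; here the stages just pass integers through to show the wiring):

    ```cpp
    // reader -> worker -> writer, joined by two bounded queues.
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <optional>
    #include <queue>
    #include <thread>

    template <typename T>
    class BoundedQueue {
        std::queue<T> q_;
        std::mutex m_;
        std::condition_variable not_full_, not_empty_;
        size_t cap_;
        bool closed_ = false;
    public:
        explicit BoundedQueue(size_t cap) : cap_(cap) {}
        void push(T v) {
            std::unique_lock<std::mutex> lk(m_);
            not_full_.wait(lk, [&] { return q_.size() < cap_; });  // producer sleeps when full
            q_.push(std::move(v));
            not_empty_.notify_one();
        }
        std::optional<T> pop() {
            std::unique_lock<std::mutex> lk(m_);
            not_empty_.wait(lk, [&] { return !q_.empty() || closed_; });
            if (q_.empty()) return std::nullopt;                   // closed and drained
            T v = std::move(q_.front());
            q_.pop();
            not_full_.notify_one();
            return v;
        }
        void close() {
            std::lock_guard<std::mutex> lk(m_);
            closed_ = true;
            not_empty_.notify_all();
        }
    };

    int main() {
        BoundedQueue<int> raw(16), done(16);
        const int n = 100;

        std::thread reader([&] {                 // thread 1: "read files"
            for (int i = 0; i < n; ++i) raw.push(i);
            raw.close();
        });
        std::thread worker([&] {                 // thread 2: calculations
            while (auto v = raw.pop()) done.push(*v * 2);
            done.close();
        });
        long long total = 0;
        std::thread writer([&] {                 // thread 3: "write to disk"
            while (auto v = done.pop()) total += *v;
        });

        reader.join(); worker.join(); writer.join();
        std::printf("total = %lld\n", total);
        return 0;
    }
    ```

    The bounded capacity gives exactly the sleep/wake behaviour from step 1: `push` blocks when the queue is full and resumes when the consumer drains it.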

    Also, how does the operating system handle all the files? Are they all in one directory? What is performance like when you browse the directory (GUI file manager / dir / ls)? If this performance is bad, you might be working outside your file system's comfort zone. Some file systems are optimised for particular usage patterns, e.g. large files, lots of small files, etc. (although you may only be able to change this on Unix). You could also consider splitting the files across different directories.

  • 2021-02-01 21:46

    Opening a file handle probably isn't the bottleneck; actual disk IO is. If you can parallelize disk access (e.g. by using multiple disks, faster disks, a RAM disk, ...) you may benefit far more. Also, make sure IO doesn't block the application: read from disk, and process while waiting for IO, e.g. with a reader thread and a processor thread.

    Another thing: if the next step depends on the current calculation, why go through the effort of saving it to disk? Maybe with another view of the process's dependencies you can rework the data flow and get rid of a lot of IO.

    Oh yes, and measure it :)

  • 2021-02-01 21:46

    Using memory-mapped files should be investigated, as it will reduce the number of system calls.
