Compression formats with good support for random access within archives?

滥情空心 2020-11-27 11:45

This is similar to a previous question, but the answers there don't satisfy my needs and my question is slightly different:

I currently use gzip compression for som

13 Answers
  • 2020-11-27 12:14

    I don't know if it's been mentioned yet, but the Kiwix project has done great work in this regard. Through their program Kiwix, they offer random access into ZIM file archives. Good compression, too. The project originated when there was demand for offline copies of Wikipedia (which has grown to more than 100 GB uncompressed, with all media included). They have successfully taken a 25 GB file (a single-file embodiment of Wikipedia without most of the media) and compressed it to a measly 8 GB ZIM archive. And through the Kiwix program, you can call up any page of Wikipedia, with all associated data, faster than you can by surfing the net.

    Even though Kiwix is a technology built around the Wikipedia database structure, it proves that you can have excellent compression ratios and random access simultaneously.

  • 2020-11-27 12:16

    Take a look at dictzip. It is compatible with gzip and allows coarse random access.

    An excerpt from its man page:

    dictzip compresses files using the gzip(1) algorithm (LZ77) in a manner which is completely compatible with the gzip file format. An extension to the gzip file format (Extra Field, described in 2.3.1.1 of RFC 1952) allows extra data to be stored in the header of a compressed file. Programs like gzip and zcat will ignore this extra data. However, [dictzcat --start] will make use of this data to perform pseudo-random access on the file.

    The dictzip package is available in Ubuntu, and its source code is in the dictd-*.tar.gz tarball. Its license is the GPL, so you are free to study it.

    Update:

    I have improved dictzip to remove the file size limit. My implementation is under the MIT license.
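    To make the mechanism concrete, here is a minimal Python sketch that reads the gzip Extra Field (RFC 1952, section 2.3.1.1) and pulls out dictzip's chunk index. The "RA" subfield layout assumed here (version, uncompressed chunk length, chunk count, then per-chunk compressed sizes) reflects my reading of the dictzip sources; treat the field names as illustrative and check the source for the authoritative definition.

    ```python
    import struct

    def read_dictzip_index(path):
        """Return (uncompressed chunk length, list of compressed chunk sizes)
        from the 'RA' subfield in a dictzip file's gzip Extra Field."""
        with open(path, "rb") as f:
            magic, _method, flags = struct.unpack("<2sBB", f.read(4))
            if magic != b"\x1f\x8b":
                raise ValueError("not a gzip file")
            f.read(6)                       # skip MTIME, XFL, OS
            if not (flags & 0x04):          # FEXTRA bit: no Extra Field present
                raise ValueError("no Extra Field, so not a dictzip file")
            (xlen,) = struct.unpack("<H", f.read(2))
            extra = f.read(xlen)

        # The Extra Field is a sequence of subfields: SI1, SI2, 2-byte length, data.
        pos = 0
        while pos + 4 <= len(extra):
            si = extra[pos:pos + 2]
            (length,) = struct.unpack_from("<H", extra, pos + 2)
            data = extra[pos + 4:pos + 4 + length]
            if si == b"RA":                 # dictzip's random-access subfield
                ver, chlen, chcnt = struct.unpack_from("<3H", data, 0)
                sizes = struct.unpack_from("<%dH" % chcnt, data, 6)
                return chlen, list(sizes)
            pos += 4 + length
        raise ValueError("no 'RA' subfield found")
    ```

    With the uncompressed chunk length and a running total of the compressed chunk sizes, you can seek straight to the chunk containing any uncompressed offset and inflate only that chunk.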

  • 2020-11-27 12:16

    I'm not sure if this would be practical in your exact situation, but couldn't you just split each large file into smaller pieces, say 10 MB each, and gzip them individually? You would end up with a bunch of files: file0.gz, file1.gz, file2.gz, and so on. Given an offset within the original large file, you would read from the file named "file" + (offset / 10485760) + ".gz"; the offset within that uncompressed piece would be offset % 10485760. A rough sketch of the lookup follows.
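    A minimal sketch of that lookup in Python; the chunk size, file-name prefix, and function name are illustrative, not something the answer prescribes:

    ```python
    import gzip

    CHUNK = 10 * 1024 * 1024   # 10 MB of uncompressed data per piece (10485760)

    def read_at(offset, length, prefix="file"):
        """Read `length` bytes starting at uncompressed `offset`, where the
        original file was split into CHUNK-sized pieces and gzipped as
        file0.gz, file1.gz, ... (hypothetical naming)."""
        out = bytearray()
        while length > 0:
            index, local = divmod(offset, CHUNK)
            with gzip.open("%s%d.gz" % (prefix, index), "rb") as f:
                f.seek(local)              # decompress-and-skip within this piece only
                data = f.read(min(length, CHUNK - local))
            if not data:                   # ran past the end of the data
                break
            out += data
            offset += len(data)
            length -= len(data)
        return bytes(out)
    ```

    Note that GzipFile.seek() still has to decompress from the start of the selected piece up to the local offset, so the piece size is a trade-off between seek cost and compression ratio.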

  • 2020-11-27 12:16

    Two possible solutions:

    1. Let the OS deal with compression: create and mount a compressed file system (SquashFS, clicfs, cloop, cramfs, e2compr or whatever) containing all your text files, and don't do anything about compression in your application program (see the sketch after this list).

    2. Use clicfs directly on each text file (one clicfs per text file) instead of compressing a filesystem image. Think of "mkclicfs mytextfile mycompressedfile" as being like "gzip <mytextfile >mycompressedfile", and of "clicfs mycompressedfile directory" as a way of getting random access to the data via the file "directory/mytextfile".
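    With either option, the application side collapses to ordinary file I/O against the mounted path; a minimal sketch in Python, with a hypothetical mount point:

    ```python
    # The compressed image (SquashFS, clicfs, ...) is assumed to be mounted at
    # /mnt/textfiles, so decompression happens in the filesystem/FUSE layer,
    # not in the application.
    with open("/mnt/textfiles/mytextfile", "rb") as f:
        f.seek(123456789)      # seek anywhere in the uncompressed view of the file
        data = f.read(4096)    # only the blocks backing this range are decompressed
    ```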

  • 2020-11-27 12:22

    This is a very old question, but it looks like zindex could provide a good solution (although I don't have much experience with it).

  • 2020-11-27 12:25

    razip supports random access, with better performance than gzip/bzip2, which have to be tweaked to get such support by trading away compression in exchange for merely "OK" random access:

    http://sourceforge.net/projects/razip/
