I have a 1TB sparse file on Linux that actually stores only 32MB of data.
Is it possible to "efficiently" make a package to store the sparse file? The package should unpack to a 1TB sparse file on another computer. Ideally, the "package" should be around 32MB.
Note: One possible solution is to use 'tar': https://wiki.archlinux.org/index.php/Sparse_file#Archiving_with_.60tar.27
However, for a 1TB sparse file, although the tar ball may be small, archiving the sparse file takes far too long.
Edit 1
I tested tar and gzip and the results are as follows (note that this sparse file contains 0 bytes of data).
$ du -hs sparse-1
0 sparse-1
$ ls -lha sparse-1
-rw-rw-r-- 1 user1 user1 1.0T 2012-11-03 11:17 sparse-1
$ time tar cSf sparse-1.tar sparse-1
real 96m19.847s
user 22m3.314s
sys 52m32.272s
$ time gzip sparse-1
real 200m18.714s
user 164m33.835s
sys 10m39.971s
$ ls -lha sparse-1*
-rw-rw-r-- 1 user1 user1 1018M 2012-11-03 11:17 sparse-1.gz
-rw-rw-r-- 1 user1 user1 10K 2012-11-06 23:13 sparse-1.tar
The 1TB file sparse-1, which contains 0 bytes of data, can be archived by 'tar' into a 10KB tar ball, or compressed by gzip into a ~1GB file. gzip takes roughly twice as long as tar.
From this comparison, 'tar' seems better than gzip.
However, 96 minutes is far too long for a sparse file that contains 0 bytes of data.
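(As a sketch for anyone reproducing this test: a sparse file with a huge apparent size and no data can be created instantly; sparse-1 here matches the filename used above.)
$ truncate -s 1T sparse-1
# equivalently, using dd:
$ dd if=/dev/zero of=sparse-1 bs=1 count=0 seek=1T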
Edit 2
rsync seems to finish copying the file in more time than tar, but in less time than gzip:
$ time rsync --sparse sparse-1 sparse-1-copy
real 124m46.321s
user 107m15.084s
sys 83m8.323s
$ du -hs sparse-1-copy
4.0K sparse-1-copy
Hence, tar + cp or scp should be faster than using rsync directly for this extremely sparse file.
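One way to combine the archive and transfer steps is to stream the tar archive over ssh and unpack it on the fly, so the full 1TB is never materialized on the wire. A sketch, where user@remote and the target directory are placeholders:
$ tar cSf - sparse-1 | ssh user@remote 'cd /target/dir && tar xSf -'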
Edit 3
Thanks to @mvp for pointing out the SEEK_HOLE functionality in newer kernels. (I previously worked on a 2.6.32 Linux kernel.)
Note: bsdtar version >=3.0.4 is required (check here: http://ask.fclose.com/4/how-to-efficiently-archive-a-very-large-sparse-file?show=299#c299 ).
On a newer kernel and Fedora release (17), tar and cp handle the sparse file very efficiently.
[zma@office tmp]$ ls -lh pmem-1
-rw-rw-r-- 1 zma zma 1.0T Nov 7 20:14 pmem-1
[zma@office tmp]$ time tar cSf pmem-1.tar pmem-1
real 0m0.003s
user 0m0.003s
sys 0m0.000s
[zma@office tmp]$ time cp pmem-1 pmem-1-copy
real 0m0.020s
user 0m0.000s
sys 0m0.003s
[zma@office tmp]$ ls -lh pmem*
-rw-rw-r-- 1 zma zma 1.0T Nov 7 20:14 pmem-1
-rw-rw-r-- 1 zma zma 1.0T Nov 7 20:15 pmem-1-copy
-rw-rw-r-- 1 zma zma 10K Nov 7 20:15 pmem-1.tar
[zma@office tmp]$ mkdir t
[zma@office tmp]$ cd t
[zma@office t]$ time tar xSf ../pmem-1.tar
real 0m0.003s
user 0m0.000s
sys 0m0.002s
[zma@office t]$ ls -lha
total 8.0K
drwxrwxr-x 2 zma zma 4.0K Nov 7 20:16 .
drwxrwxrwt. 35 root root 4.0K Nov 7 20:16 ..
-rw-rw-r-- 1 zma zma 1.0T Nov 7 20:14 pmem-1
I am using a 3.6.5 kernel:
[zma@office t]$ uname -a
Linux office.zhiqiangma.com 3.6.5-1.fc17.x86_64 #1 SMP Wed Oct 31 19:37:18 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
Short answer: Use bsdtar or GNU tar (version 1.29 or later) to create archives, and GNU tar (version 1.26 or later) to extract them on another box.
Long answer: There are some requirements for this to work.
First, Linux must be at least kernel 3.1 (Ubuntu 12.04 or later would do), so that it supports the SEEK_HOLE functionality.
Then, you need a tar utility that supports this syscall. GNU tar supports it since version 1.29 (released on 2016-05-16; it should be present by default since Ubuntu 18.04), and bsdtar since version 3.0.4 (available since Ubuntu 12.04) - install it using sudo apt-get install bsdtar.
While bsdtar (which uses libarchive) is awesome, unfortunately, it is not very smart when it comes to untarring - it requires at least as much free space on the target drive as the untarred file size, without regard to holes. GNU tar will untar such sparse archives efficiently and will not check this condition.
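To check whether a given box meets both requirements before transferring anything, a quick sketch (the exact output will of course differ per system):
$ uname -r
$ tar --version | head -n1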
This is a log from Ubuntu 12.10 (Linux kernel 3.5):
$ dd if=/dev/zero of=1tb seek=1T bs=1 count=1
1+0 records in
1+0 records out
1 byte (1 B) copied, 0.000143113 s, 7.0 kB/s
$ time bsdtar cvfz sparse.tar.gz 1tb
a 1tb
real 0m0.362s
user 0m0.336s
sys 0m0.020s
# Or, use GNU tar if version is 1.29 or later:
$ time tar cSvfz sparse-gnutar.tar.gz 1tb
1tb
real 0m0.005s
user 0m0.006s
sys 0m0.000s
$ ls -l
-rw-rw-r-- 1 autouser autouser 1099511627777 Nov 7 01:43 1tb
-rw-rw-r-- 1 autouser autouser 257 Nov 7 01:43 sparse.tar.gz
-rw-rw-r-- 1 autouser autouser 134 Nov 7 01:43 sparse-gnutar.tar.gz
$
Like I said above, unfortunately, untarring with bsdtar will not work unless you have 1TB of free space. However, any version of GNU tar works just fine to untar such a sparse.tar:
$ rm 1tb
$ time tar -xvSf sparse.tar.gz
1tb
real 0m0.031s
user 0m0.016s
sys 0m0.016s
$ ls -l
total 8
-rw-rw-r-- 1 autouser autouser 1099511627777 Nov 7 01:43 1tb
-rw-rw-r-- 1 autouser autouser 257 Nov 7 01:43 sparse.tar.gz
From a related question, maybe rsync will work:
rsync --sparse sparse-1 sparse-1-copy
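The --sparse flag also works when copying to another machine over ssh; a sketch, with user@remote and the destination path as placeholders:
$ rsync --sparse sparse-1 user@remote:/path/to/sparse-1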
I realize this question is very old, but here's an update that may be helpful to others who find their way here the same way I did.
Thankfully, mvp's excellent answer is now obsolete. According to the GNU tar release notes, SEEK_HOLE/SEEK_DATA was added in v. 1.29, released 2016-05-16. (And with GNU tar v. 1.30 being standard in Debian stable now, it's safe to assume that tar version ≥ 1.29 is available almost everywhere.)
So the way to handle sparse files now is to archive them with whichever tar (GNU or BSD) is installed on your system, and the same goes for extracting.
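For example, a minimal create/extract round trip with any modern tar (the file and archive names are placeholders):
$ tar -cSf sparse.tar sparse-file
$ tar -xf sparse.tar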
Additionally, for sparse files that actually contain some data, if it's worthwhile to use compression (i.e. the data is compressible enough to save substantial disk space, and the disk space savings are worth the likely-substantial time and CPU resources required to compress it):
tar -cSjf <archive>.tar.bz2 /path/to/sparse/file
will both take advantage of tar's SEEK_HOLE functionality to efficiently archive the sparse file and use bzip2 to compress the actual data. As alluded to in marcin's comment,
tar --use-compress-program=pbzip2 -cSf <archive>.tar.bz2 /path/to/sparse/file
will do the same while also taking advantage of multiple cores for the compression task.
On my little home server with a quad-core Atom CPU, using pbzip2 vs bzip2 reduced the time by around 25-30%.
With or without compression, this will give you an archive that doesn't need any special sparse-file handling, takes up approximately the 'real' size of the original sparse file (or less if compressed), and can be easily moved around with cp or rsync (both of which can be used on the original sparse file without trashing the sparseness), or scp (which can't).
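For reference, the sparseness-preserving invocations for moving the original file itself look like this (a sketch; filenames are placeholders):
$ cp --sparse=always sparse-file sparse-file-copy
$ rsync --sparse sparse-file sparse-file-copy2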
Additional Notes
- When extracting, tar will automatically detect an archive created with -S, so there's no need to specify it.
- An archive created with pbzip2 is stored in chunks. This results in the archive being marginally bigger than if bzip2 is used, but also means that the extraction can be multithreaded, unlike an archive created with bzip2 (see the example after these notes). pbzip2 and bzip2 will reliably extract each other's archives without error or corruption.
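For completeness, the matching multithreaded extraction (assuming pbzip2 is installed; the archive name is a placeholder):
$ tar --use-compress-program=pbzip2 -xf <archive>.tar.bz2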
You're definitely looking for a compression tool such as tar, lzma, bzip2, zip or rar. According to this site, lzma is quite fast while still having quite a good compression ratio:
http://blog.terzza.com/linux-compression-comparison-gzip-vs-bzip2-vs-lzma-vs-zip-vs-compress/
You can also adjust the speed/quality trade-off of the compression by setting the compression level to something low; experiment a bit to find a level that works best.
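For example, with gzip the level can be set on the compressor in a pipe (a sketch; -1 is fastest, -9 compresses best):
$ tar -cSf - sparse-1 | gzip -1 > sparse-1.tar.gz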
Source: https://stackoverflow.com/questions/13252682/copying-a-1tb-sparse-file