Question
I have this homework where I have to transfer a very big file from one source to multiple machines using a BitTorrent-like algorithm. Initially I cut the file into chunks and transfer the chunks to all the targets. The targets are smart enough to share the chunks they already have with the other targets. This works fine. I wanted to transfer a 4GB file, so I tarred together four 1GB files. Creating the 4GB tar file did not produce any error, but at the other end, while assembling all the chunks back into the original file, it errors out saying the file size limit was exceeded. How can I go about solving this 2GB limitation problem?
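For context, a minimal sketch of the kind of reassembly loop involved (this is hypothetical; the chunk file names and the append-style approach are assumptions, not the asker's actual code). On a 32-bit build without large file support, the write would start failing with EFBIG ("File too large") once the output passes 2 GiB:

#include <stdio.h>

/* Hypothetical reassembly loop: append chunk files chunk.0 .. chunk.N-1
 * into one output file. Without -D_FILE_OFFSET_BITS=64 on a 32-bit
 * system, writes past 2 GiB fail with EFBIG ("File too large"). */
int main(void)
{
    const int nchunks = 16;              /* assumed number of chunks */
    FILE *out = fopen("assembled.tar", "wb");
    if (!out) { perror("fopen output"); return 1; }

    char buf[1 << 16];
    for (int i = 0; i < nchunks; i++) {
        char name[64];
        snprintf(name, sizeof name, "chunk.%d", i);
        FILE *in = fopen(name, "rb");
        if (!in) { perror(name); return 1; }

        size_t n;
        while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
            if (fwrite(buf, 1, n, out) != n) {
                perror("fwrite");        /* EFBIG shows up here near 2 GiB */
                return 1;
            }
        }
        fclose(in);
    }
    fclose(out);
    return 0;
}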
Answer 1:
I can think of two possible reasons:
- You don't have Large File Support enabled in your Linux kernel
- Your application isn't compiled with large file support (you might need to pass gcc an extra flag to tell it to use the 64-bit versions of certain file I/O functions), e.g.
gcc -D_FILE_OFFSET_BITS=64
A quick way to check which mode you are compiling in is shown below.
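As a sanity check (just a sketch, not from the original answer), a program like this should report 8 when built with -D_FILE_OFFSET_BITS=64 on a 32-bit system, and 4 without it:

#include <stdio.h>
#include <sys/types.h>

/* Prints the width of off_t: 8 bytes means 64-bit file offsets
 * (large file support), 4 bytes means the old 2 GiB limit applies. */
int main(void)
{
    printf("sizeof(off_t) = %zu\n", sizeof(off_t));
    return 0;
}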
Answer 2:
This depends on the filesystem type. When using ext3, I have no such problems with files that are significantly larger.
If the underlying disk is FAT, NTFS or CIFS (SMB), you must also make sure you use the latest version of the appropriate driver. Some older drivers have file-size limits like the one you are experiencing.
Answer 3:
Could this be related to a system limit configuration? Check with:
$ ulimit -a
The limit can be set in /etc/security/limits.conf, for example:
vi /etc/security/limits.conf
vivek hard fsize 1024000
If you do not want any limit, remove the fsize line from /etc/security/limits.conf.
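To see whether a per-process file size limit is what you are hitting, you can also query it programmatically (a minimal sketch using the standard getrlimit interface; nothing here comes from the original answer):

#include <stdio.h>
#include <sys/resource.h>

/* Print the soft limit on the size of files this process may create.
 * "unlimited" corresponds to RLIM_INFINITY; anything else is in bytes. */
int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_FSIZE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("RLIMIT_FSIZE: unlimited\n");
    else
        printf("RLIMIT_FSIZE: %llu bytes\n", (unsigned long long)rl.rlim_cur);
    return 0;
}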
Answer 4:
If your system supports it, you can get hints with: man largefile
Answer 5:
You should use fseeko and ftello; see fseeko(3).
Note that you should define _FILE_OFFSET_BITS to 64 before including any headers:
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
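Putting that together, here is a minimal sketch (the file name bigfile.bin and the 3 GiB target offset are made up for illustration) that seeks past the 2 GiB mark and writes a byte, which only works when off_t is 64 bits wide:

#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <sys/types.h>

/* Create a (sparse) file larger than 2 GiB by seeking to a 3 GiB offset
 * and writing a single byte. fseeko/ftello take off_t, which is 64 bits
 * wide thanks to _FILE_OFFSET_BITS above. */
int main(void)
{
    FILE *f = fopen("bigfile.bin", "wb");
    if (!f) { perror("fopen"); return 1; }

    off_t target = (off_t)3 * 1024 * 1024 * 1024;   /* 3 GiB */
    if (fseeko(f, target, SEEK_SET) != 0) { perror("fseeko"); return 1; }

    fputc('x', f);
    if (fflush(f) != 0) { perror("fflush"); return 1; }  /* actual write happens here */

    printf("file offset after write: %lld\n", (long long)ftello(f));
    fclose(f);
    return 0;
}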
Source: https://stackoverflow.com/questions/560238/how-to-create-a-file-of-size-more-than-2gb-in-linux-unix