I'm new to 64-bit architecture. Could you tell me the maximum file size supported by file mapping on a 64-bit Linux machine? I want to open files larger than 20 GB via file mapping.
I agree with MarkR: you are dereferencing an invalid address.
// Bug in these lines: pCur ends up pointing past the last valid byte
// of the mapping, so dereferencing it reads an invalid address.
unsigned char* pCur = pBegin + GBSIZE;
printf("%c", *pCur);

// The last valid byte is one element before the end of the mapping:
unsigned char* pEnd = pBegin + NUMSIZE;
unsigned char* pLast = pEnd - 1;
unsigned char* pCur = pLast;
I modified your code to use huge TLB (MAP_HUGETLB) flags as follows.
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>
#define MAP_HUGETLB 0x40000 /* create a huge page mapping */
#define MAP_HUGE_SHIFT 26
#define MAP_HUGE_1GB (30 << MAP_HUGE_SHIFT)
#define KSIZE 1024L
#define MSIZE (1024L*1024L)
#define GSIZE (1024L*1024L*1024L)
#define TSIZE (1024L*GSIZE)
#define INIT_MEM 0
// Fail on my MacBook Pro (Retina, 13-inch, Early 2015)
// Darwin Kernel Version 16.5.0:x86_64
// #define NUMSIZE (16L * TSIZE)
// mmap ok; init: got killed; signal 9
// #define NUMSIZE (8L * TSIZE)
// Got killed signal 9
// #define NUMSIZE (1L * TSIZE)
// OK
// #define NUMSIZE (200L * GSIZE)
// OK
#define NUMSIZE (20L * GSIZE)
typedef unsigned long long ETYPE;
#define MEMSIZE (NUMSIZE*sizeof(ETYPE))
#define PGSIZE (16*KSIZE)
void init(ETYPE* ptr) {
*ptr = (ETYPE)ptr;
}
int verify(ETYPE* ptr) {
if (*ptr != (ETYPE)ptr) {
fprintf(stderr, "ERROR: 0x%016llx != %p.\n", *ptr, ptr);
return -1;
}
else {
fprintf(stdout, "OK: 0x%016llx = %p.\n", *ptr, ptr);
}
return 0;
}
int main(int argc, char *argv[])
{
ETYPE *pBegin;
int flags = MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB;
printf("mmap memory size:%lu GB\n", MEMSIZE/GSIZE);
pBegin = (ETYPE*) mmap(0, MEMSIZE, PROT_READ | PROT_WRITE, flags, -1, 0);
if (pBegin == MAP_FAILED) {
perror("Error mmapping the file");
exit(EXIT_FAILURE);
}
ETYPE* pEnd = pBegin + NUMSIZE;
ETYPE* pCur = pBegin;
#if INIT_MEM
while (pCur < pEnd) {
init(pCur);
// ++pCur; //slow if init all addresses.
pCur += (PGSIZE/sizeof(ETYPE));
}
#endif
init(&pBegin[0]);
init(&pBegin[NUMSIZE-1]);
verify(&pBegin[0]);
verify(&pBegin[NUMSIZE-1]);
if (munmap(pBegin, MEMSIZE) == -1) {
perror("Error un-mmapping the file");
}
return 0;
}
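Note that the huge-page variant has prerequisites: on x86-64 the CPU must support 1 GiB pages (the pdpe1gb flag in /proc/cpuinfo), and enough 1 GiB huge pages must be reserved in advance, for example with the hugepagesz=1G hugepages=N kernel boot parameters or by writing to /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages. If the pool is too small, the mmap call fails with ENOMEM even though there is plenty of address space; dropping MAP_HUGETLB | MAP_HUGE_1GB from flags gives you an ordinary anonymous mapping with no such requirement.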
Although pointers are 64 bits wide, most processors do not actually support virtual addresses that use the full 64 bits. To see what size virtual addresses your processor supports (48 bits is typical), look in /proc/cpuinfo:
grep "address sizes" /proc/cpuinfo
Additionally, half of the virtual address space is reserved for the kernel and not available to userspace, leaving 47 bits in the current Linux implementation. Even taking this into account, you still have plenty of room for a 20 GB file: 47 bits gives a userspace virtual address space of 128 TB (2^47 bytes), thousands of times larger than the file you want to map.
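To get a feel for how roomy that is, here is a minimal sketch of my own (not from the original answer) that reserves a full 1 TiB of virtual address space without committing any physical memory; on a typical 48-bit x86-64 Linux box it should succeed:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
    /* 1 TiB: far more than 20 GB, yet only a sliver of a 47-bit address space. */
    size_t len = 1024UL * 1024UL * 1024UL * 1024UL;
    /* PROT_NONE + MAP_NORESERVE: reserve address space only, no physical memory. */
    void *p = mmap(NULL, len, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        exit(EXIT_FAILURE);
    }
    printf("reserved 1 TiB of address space at %p\n", p);
    munmap(p, len);
    return 0;
}

The reservation consumes address space only; nothing is charged against RAM until the pages are made accessible and actually touched.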
64-bit addresses allow for many orders of magnitude more than 20 GB.
From the mmap(2) man page:
void *mmap(void *addr, size_t length, int prot, int flags,
int fd, off_t offset);
length is a size_t, which on 64-bit machines is 64 bits wide. Therefore yes, you can theoretically map a 20 GB file.
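As a concrete sketch of my own (not part of the original answer), this maps an entire large file read-only in a single call; "bigfile.bin" is a placeholder path, the file is assumed to be non-empty, and error handling is minimal:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("bigfile.bin", O_RDONLY);
    if (fd < 0) { perror("open"); exit(EXIT_FAILURE); }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); exit(EXIT_FAILURE); }

    /* length is a size_t: on 64-bit Linux a 20 GB file fits in one mapping. */
    unsigned char *p = (unsigned char *) mmap(NULL, st.st_size, PROT_READ,
                                              MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); exit(EXIT_FAILURE); }

    printf("mapped %lld bytes, first byte = 0x%02x\n",
           (long long)st.st_size, p[0]);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}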
(This answer was originally edited into the question by OP)
You have requested a 20 GB mapping onto a file that is only 50 MB in size.
As described in the mmap man page, mmap succeeds even when the requested length is larger than the file, but you will get SIGBUS or SIGSEGV when you actually try to access memory beyond the end of the underlying file.
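One way to avoid that, sketched below under my own assumptions (you want to write through the mapping, the filesystem supports a sparse 20 GB file, and "data.bin" and MAPSIZE are placeholders), is to extend the file to the full mapping size with ftruncate before touching the memory:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define MAPSIZE (20L * 1024L * 1024L * 1024L)  /* 20 GB */

int main(void)
{
    int fd = open("data.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); exit(EXIT_FAILURE); }

    /* Grow the file so every page of the mapping is backed by it.
       The file is sparse: blocks are only allocated when pages are written. */
    if (ftruncate(fd, MAPSIZE) < 0) { perror("ftruncate"); exit(EXIT_FAILURE); }

    unsigned char *p = (unsigned char *) mmap(NULL, MAPSIZE,
                                              PROT_READ | PROT_WRITE,
                                              MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); exit(EXIT_FAILURE); }

    p[0] = 1;            /* first byte of the mapping */
    p[MAPSIZE - 1] = 1;  /* last byte: valid now, would raise SIGBUS without the ftruncate */

    munmap(p, MAPSIZE);
    close(fd);
    return 0;
}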