Can you allocate a very large single chunk of memory (> 4GB) in C or C++?

面向向阳花 2020-12-02 10:46

With the very large amounts of RAM available these days, I was wondering: is it possible to allocate a single chunk of memory that is larger than 4GB? Or would I need to allocate a bunch of smaller chunks and manage them myself?

10 answers
  • 2020-12-02 11:05

    If size_t is greater than 32 bits on your system, you've cleared the first hurdle. But the C and C++ standards aren't responsible for determining whether any particular call to new or malloc succeeds (except malloc with a 0 size). That depends entirely on the OS and the current state of the heap.

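    A minimal sketch of those two hurdles, assuming a reasonably modern C/C++ toolchain; the 5GB figure is arbitrary, and even when size_t is 8 bytes wide, whether the malloc call succeeds is still entirely up to the OS and the current heap state:

        #include <cstdint>
        #include <cstdio>
        #include <cstdlib>

        int main() {
            // First hurdle: can size_t even describe a > 4GB request?
            std::printf("sizeof(size_t) = %zu bytes\n", sizeof(size_t));

            const unsigned long long request = 5ULL * 1024 * 1024 * 1024;  // 5GB, > 4GB
            if (request > SIZE_MAX) {
                std::printf("size_t is too small for a %llu-byte chunk\n", request);
                return 1;
            }

            // Second hurdle: will the OS actually hand over one contiguous chunk?
            void* p = std::malloc(static_cast<std::size_t>(request));
            if (p == nullptr) {
                std::printf("malloc refused %llu bytes\n", request);
                return 1;
            }
            std::printf("got %llu bytes in one chunk\n", request);
            std::free(p);
            return 0;
        }

    Note that on systems that overcommit (Linux by default, for example), a successful malloc only reserves address space; the pages aren't really committed until you touch them.
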
  • 2020-12-02 11:06

    It depends on which C compiler you're using, and on what platform (of course), but there's no fundamental reason why you cannot allocate the largest chunk of contiguously available memory - which may be less than you need. And of course you may have to be using a 64-bit system to address that much RAM...

    See malloc for history and details.

    Call HeapMax in alloc.h to get the largest available block size.

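    If HeapMax (or something like it) isn't available on your toolchain, a crude but portable way to estimate the largest contiguous block is to probe with malloc/free; this is only a sketch, and the number it reports is a snapshot of the current heap state, not a guarantee:

        #include <cstdint>
        #include <cstdio>
        #include <cstdlib>

        // Binary-search for the largest size malloc will currently grant.
        // Caveat: on OSes that overcommit, a "successful" malloc may not be
        // backed by physical memory until the pages are actually touched.
        static std::size_t largest_malloc_block(std::size_t hi) {
            std::size_t lo = 0;                       // largest size known to work
            while (lo < hi) {
                std::size_t mid = lo + (hi - lo + 1) / 2;
                if (void* p = std::malloc(mid)) {
                    std::free(p);
                    lo = mid;                         // mid worked, try bigger
                } else {
                    hi = mid - 1;                     // mid failed, try smaller
                }
            }
            return lo;
        }

        int main() {
            std::size_t biggest = largest_malloc_block(SIZE_MAX / 2);
            std::printf("largest single block right now: ~%zu bytes\n", biggest);
            return 0;
        }
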
  • 2020-12-02 11:06

    Like everyone else said, getting a 64-bit machine is the way to go. But even on a 32-bit Intel machine, you can address more than 4GB of memory if your OS and your CPU support PAE. Unfortunately, 32-bit WinXP does not do this (does 32-bit Vista?). Linux lets you do this by default, but you will be limited to 4GB areas, even with mmap(), since pointers are still 32-bit.

    What you should do, though, is let the operating system take care of the memory management for you. Get into an environment that can handle that much RAM, read the XML file(s) into (a) data structure(s), and let it allocate the space for you. Then operate on the data structure in memory, instead of operating on the XML file itself.

    Even on 64-bit systems, though, you're not going to have a lot of control over which portions of your program actually sit in RAM, in cache, or are paged to disk, at least in most instances, since the OS and the MMU handle this themselves.

  • 2020-12-02 11:07

    Have you considered using memory-mapped files? Since you are loading really huge files, this might be the best way to go.

  • 2020-12-02 11:11

    The advantage of memory-mapped files is that you can open a file much bigger than 4GB (practically unlimited on NTFS!) and have multiple <4GB memory windows into it.
    It's much more efficient than opening a file and reading it into memory; on most operating systems it uses the built-in paging support.

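    A rough Windows-flavoured sketch of that "window" idea using CreateFileMapping/MapViewOfFile; the file name, the 6GB offset and the 256MB window size below are just placeholders:

        #include <windows.h>
        #include <cstdio>

        int main() {
            // Open an existing big file read-only (name is a placeholder).
            HANDLE file = CreateFileW(L"huge_input.xml", GENERIC_READ, FILE_SHARE_READ,
                                      nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
            if (file == INVALID_HANDLE_VALUE) return 1;

            // Mapping object covering the whole file (sizes 0,0 = use the file's size).
            HANDLE mapping = CreateFileMappingW(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
            if (mapping == nullptr) { CloseHandle(file); return 1; }

            // Map a 256MB window starting 6GB into the file. The offset must be
            // a multiple of the system allocation granularity (usually 64KB).
            ULONGLONG offset = 6ULL * 1024 * 1024 * 1024;
            SIZE_T    window = 256 * 1024 * 1024;
            const char* view = static_cast<const char*>(
                MapViewOfFile(mapping, FILE_MAP_READ,
                              static_cast<DWORD>(offset >> 32),         // offset, high 32 bits
                              static_cast<DWORD>(offset & 0xFFFFFFFFu), // offset, low 32 bits
                              window));
            if (view != nullptr) {
                std::printf("first byte of the window: %d\n", view[0]);
                UnmapViewOfFile(view);   // drop this window; map another if needed
            }

            CloseHandle(mapping);
            CloseHandle(file);
            return 0;
        }

    On POSIX systems the same pattern works with mmap() and an explicit file offset.
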
  • 2020-12-02 11:11

    It depends on whether the OS will give you virtual address space that allows addressing memory above 4GB and whether the compiler supports allocating it using new/malloc.

    For 32-bit Windows you won't be able to get a single chunk bigger than 4GB, as the pointer size is 32 bits, limiting your virtual address space to 4GB. (You could use Physical Address Extension to get more than 4GB of physical memory; however, I believe you would have to map that memory into your 4GB virtual address space yourself.)

    For 64-bit Windows, the VC++ compiler supports 64-bit pointers, with a theoretical virtual address space limit of 8TB.

    I suspect the same applies to Linux/gcc - a 32-bit build won't allow it, whereas a 64-bit build will.

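    A small sketch of that difference using new (via std::vector) and std::bad_alloc; the 5GB figure is arbitrary, and whether the 64-bit case succeeds still depends on the OS and the memory available:

        #include <cstdint>
        #include <cstdio>
        #include <new>
        #include <vector>

        int main() {
            const unsigned long long want = 5ULL * 1024 * 1024 * 1024;  // 5GB, > 4GB

            if (want > SIZE_MAX) {
                // Typical 32-bit build: the request can't even be expressed as a size_t.
                std::printf("address space too small for %llu bytes\n", want);
                return 1;
            }
            try {
                // 64-bit build: usually works; zero-initialization touches every
                // page, so the memory really is committed, not just reserved.
                std::vector<char> buffer(static_cast<std::size_t>(want));
                std::printf("allocated %zu contiguous bytes\n", buffer.size());
            } catch (const std::bad_alloc&) {
                std::printf("the OS refused a single %llu-byte allocation\n", want);
            }
            return 0;
        }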