How to create a very large array in C++?

Asked by 感情败类, 2021-02-03 12:17

Recently I have been working in C++ and I need to create an array[60.000][60.000]. However, I cannot create this array because it's too large. I tried float **array

5 Answers
  • 2021-02-03 12:29

    Does "60.000" actually mean "60000"? If so, the size of the required memory is 60000 * 60000 * sizeof(float), which is roughly 13.4 GB. A typical 32-bit process is limited to only 2 GB, so it is clear why it doesn't fit.

    On the other hand, I don't see why you shouldn't be able to fit that into a 64-bit process, assuming your machine has enough RAM.
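
    A minimal sketch of what that looks like in a 64-bit build, assuming the machine has enough RAM; the flattened std::vector and the 60000 constant below are just illustrative:

        #include <cstddef>
        #include <iostream>
        #include <vector>

        int main() {
            const std::size_t n = 60000;            // "60.000" read as 60000
            try {
                // One flat heap allocation of n*n floats (~13.4 GiB), indexed as row * n + col.
                std::vector<float> matrix(n * n, 0.0f);
                matrix[5 * n + 7] = 1.0f;           // the equivalent of matrix[5][7]
                std::cout << "allocated " << matrix.size() * sizeof(float) << " bytes\n";
            } catch (const std::bad_alloc&) {
                std::cerr << "not enough memory for the full matrix\n";
            }
        }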

  • 2021-02-03 12:31

    A matrix of size 60,000 x 60,000 has 3,600,000,000 elements.

    You're using type float so it becomes:

    60,000 × 60,000 × 4 bytes = 14,400,000,000 bytes ≈ 13.4 GiB
    

    Do you even have that much memory in your machine?


    Note that the issue of stack vs heap doesn't even matter unless you have enough memory to begin with.


    Here's a list of possible problems:

    • You don't have enough memory.
    • If the matrix is declared globally, you'll exceed the maximum size of the binary.
    • If the matrix is declared as a local array, then you will blow your stack (see the sketch after this list).
    • If you're compiling for 32-bit, you have far exceeded the 2GB/4GB addressing limit.
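
    As a rough sketch of the stack-versus-heap point, assuming a 64-bit build with enough RAM; the constant N is simply the size from this question:

        #include <cstddef>
        #include <memory>

        constexpr std::size_t N = 60000;

        void stack_version() {
            // float local[N][N];                   // ~13.4 GiB of automatic storage: overflows a typical 1-8 MB stack
        }

        void heap_version() {
            // One contiguous heap block, freed automatically when the unique_ptr goes out of scope.
            auto matrix = std::make_unique<float[]>(N * N);
            matrix[3 * N + 4] = 1.0f;               // element [3][4] of the flattened matrix
        }
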
  • 2021-02-03 12:38

    I had this problem too. My workaround was to chop the array into sections (the biggest single array I could get away with was float A_sub_matrix_20[62944560]). When I declared just one of these locally in main(), I got a runtime exception as soon as main() started, presumably because such a large local array is placed on the stack. Declaring 20 buffers of that size as global variables did work; in global form they do not seem to be held entirely in RAM (when I added A_sub_matrix_20[n] to the watch list in Visual Studio it showed the message "reading from file").
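
    One way to express that chunking idea without relying on global arrays is to give each section its own heap allocation. The block count of 20 mirrors the answer, while the helper names below are made up for illustration; note that this does not reduce the total memory needed, it only avoids one huge contiguous block and keeps everything off the stack:

        #include <cstddef>
        #include <vector>

        constexpr std::size_t N = 60000;
        constexpr std::size_t kBlocks = 20;                 // split the rows into 20 sections
        constexpr std::size_t kRowsPerBlock = N / kBlocks;  // 3000 rows per section

        // Each block holds kRowsPerBlock rows of N floats in its own heap allocation.
        std::vector<std::vector<float>> make_blocks() {
            std::vector<std::vector<float>> blocks(kBlocks);
            for (auto& block : blocks)
                block.assign(kRowsPerBlock * N, 0.0f);
            return blocks;
        }

        // Map a global (row, col) index onto the right block.
        float& at(std::vector<std::vector<float>>& blocks, std::size_t row, std::size_t col) {
            return blocks[row / kRowsPerBlock][(row % kRowsPerBlock) * N + col];
        }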

  • 2021-02-03 12:54

    To initialise the 2D array of floats that you want, you will need:

    60,000 × 60,000 × 4 bytes = 14,400,000,000 bytes

    That is approximately 14 GB of memory, which is a lot. To even hold that much in theory, you will need to be running a 64-bit machine, not to mention one with quite a bit of RAM installed.

    Furthermore, allocating this much memory is almost never necessary. Are you sure no optimisations could be made here?

    EDIT:

    In light of new information from your comments on other answers: you only have 4 GB of RAM. Your operating system is therefore going to have to page at least 9 GB of that data to the hard drive, in reality probably more. But you also only have 20 GB of hard drive space, which is barely enough to page all that data, especially if the disk is fragmented. Finally (I could be wrong because you haven't stated it explicitly), it is quite possible that you're running a 32-bit machine, which isn't really capable of handling more than 4 GB of memory at a time.

  • 2021-02-03 12:56

    Allocate the memory at runtime, and consider using a memory-mapped file as the backing. Like everyone says, 14 GB is a lot of memory, but it's not unreasonable to find a computer with 14 GB of RAM, nor is it unreasonable to page the memory in and out as necessary.
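
    A minimal POSIX sketch of that memory-mapped-file idea, assuming a 64-bit Linux/Unix target; the file name "matrix.bin" is made up and error handling is abbreviated:

        #include <cstddef>
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main() {
            const std::size_t n = 60000;
            const std::size_t bytes = n * n * sizeof(float);   // ~14.4 GB backing file

            // Hypothetical backing file; the OS pages chunks in and out on demand.
            int fd = open("matrix.bin", O_RDWR | O_CREAT, 0644);
            if (fd < 0) return 1;
            if (ftruncate(fd, static_cast<off_t>(bytes)) != 0) return 1;

            void* mem = mmap(nullptr, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (mem == MAP_FAILED) return 1;

            float* matrix = static_cast<float*>(mem);
            matrix[5 * n + 7] = 1.0f;                          // touching a page faults it in from the file

            munmap(mem, bytes);
            close(fd);
        }

    On Windows the corresponding primitives are CreateFileMapping and MapViewOfFile.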

    With a matrix of this size, you will likely become very curious about memory access performance. Remember to consider the cache line granularity of your target architecture, and if your target has a TLB, you may be able to use larger pages to relieve some TLB pressure. Then again, if you don't have enough memory, you'll likely care only about how fast your storage I/O is.

    If it's not already obvious, you'll need an architecture that supports a 64-bit address space in order to access this memory directly/conveniently.
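
    A tiny guard along those lines (my own sketch, not from the answer) makes the requirement explicit at compile time:

        // A 32-bit address space (at most 4 GB) cannot even map the ~14 GB matrix.
        static_assert(sizeof(void*) >= 8,
                      "a 64-bit build is required to address the full 60000 x 60000 float matrix");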
