Question
I need to create a matrix of size 10000x100000. My RAM is 4 GB. It works until the 25th iteration (in debug), but on the 25th iteration I get a "bad allocation" error, even though only 25% of RAM is in use, which suggests the problem is not memory. So what can I do?
EDIT:
int **arr = new int*[10000];
for (int i = 0; i < 10000; i++)
    arr[i] = new int[100000];
My allocation is above.
Answer 1:
If you're compiling for x64, you shouldn't have any problems.
If you're compiling for x86 (most likely), you can enable the /LARGEADDRESSAWARE linker flag if you're using Visual C++, or something similar for other compilers. For Visual C++, the option can also be found in the Linker -> System -> Enable Large Addresses property in the IDE.
This sets a flag in the resulting EXE file telling the OS that the code can handle addresses over 2 GB. When running such an executable on x64 Windows (your case), the OS gives it 4 GB of address space to play with, as opposed to just 2 GB normally.
I tested your code on my system, Windows 7 x64, 8 GB, compiled with Visual C++ Express 2013 (x86, of course) with the linker flag, and the code ran fine - allocated almost 4 GB with no error.
Anyway, the 25th iteration is far too early for it to fail, regardless of where it runs and how it's compiled (25 rows is only roughly 10 MB), so something else is going wrong there.
By the way, the /HEAP linker option doesn't help in this case: it doesn't increase the maximum heap size, it only specifies how much address space to reserve initially and in what chunks to grow the committed memory. In short, it's mostly an optimization knob.
Answer 2:
A possible solution is to use your hard drive: open a file, store the data there, and copy only the data you currently need into a buffer. Even if you succeed in allocating this much on the heap, you will fill it with data you most likely won't be using most of the time. Eventually you might run out of space, leading to degraded performance or unexpected behavior. If you're worried that using the hard drive will hurt performance, a procedural solution may fit your problem: if you can produce the data you need at any given moment instead of storing it, that solves the problem as well.
Answer 3:
If you are using VS, you'll probably want to try the /HEAP linker option and make sure you compile for an x64 target, because otherwise you'll run out of address space. The size of your physical memory should not be the limiting factor, as Windows can use the page file to provide additional memory. However, from a performance point of view, it is probably a terrible idea to just allocate a matrix of this size. Consider using a sparse matrix, or (as suggested by LifePhilPsyPro) generating the data on demand.
Answer 4:
For allocating extremely large buffers you are best off using the operating system services for mapping pages to the address space rather than new/malloc.
You are trying to allocate roughly 4 GB (10000 × 100000 × 4 bytes).
Source: https://stackoverflow.com/questions/27451481/how-to-declare-10000-x-100000-sized-integer-matrix-in-c