I have an algorithm which currently allocates a very large array of doubles, which it updates and searches frequently. The size of the array is N^2/2, where N is the number of
If you're running on PCs, page sizes for mapped files are likely to be 4 kilobytes.
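As a minimal sketch (POSIX only, and assuming a hypothetical `matrix.bin` file that already holds the packed doubles), this shows the page size and maps the file; touching any one double faults in a whole page's worth of them at once, typically 512 doubles per 4K page:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);          /* typically 4096 on PCs */
    printf("page size: %ld bytes (%ld doubles)\n",
           page, page / (long)sizeof(double));

    int fd = open("matrix.bin", O_RDONLY);      /* hypothetical data file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    double *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* Reading data[0] faults in one page: the neighbouring doubles in that
     * page become resident together, whether or not you need them. */
    printf("first value: %f\n", data[0]);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```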
So the question really becomes: if I start swapping the data out to disk, how random is my random access to the RAM-that-is-now-a-file?
And, if I can, how do I order the doubles so that the values within a given 4K page tend to be accessed together, rather than touching only a few values in each page before triggering the next 4K disk fetch?
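For example, if your accesses tend to involve nearby (i, j) pairs, one option is a tiled layout. This is only a sketch under that assumption: the size N^2/2 suggests a packed lower triangle (i > j), the names `tri_index` and `tile_index` are illustrative, and the tile size B would need tuning against your real access pattern.

```c
#include <stddef.h>

#define B 64   /* 64 * 64 doubles = 32 KiB per tile = 8 pages of 4 KiB */

/* Conventional packed layout: row-major over the strict lower triangle.
 * Rows grow longer and longer, so (i, j) and (i+1, j) end up about
 * i doubles apart -- soon far more than one page. */
static size_t tri_index(size_t i, size_t j)           /* requires i > j */
{
    return i * (i - 1) / 2 + j;
}

/* Tiled layout: B x B tiles stored row-major over the lower triangle of
 * tiles, elements row-major inside each tile, so any two elements in the
 * same tile are at most B*B doubles apart. Diagonal tiles waste their
 * upper half, which is a small overhead for large N. */
static size_t tile_index(size_t i, size_t j)          /* requires i > j */
{
    size_t ti = i / B, tj = j / B;                    /* tile coordinates     */
    size_t tile = ti * (ti + 1) / 2 + tj;             /* tiles before this one */
    return tile * (size_t)B * B + (i % B) * B + (j % B);
}
```

Whether this helps depends entirely on whether your searches and updates really do cluster in (i, j) space; if they don't, no reordering will save you from one page fault per access.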
If you use standard IO, you probably still want to read and write in chunks, but those chunks could be smaller. Sectors will be at least 512 bytes and disk clusters bigger, but what read size is best, given that each IO incurs a kernel round-trip overhead?
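One way to find out is to wrap the reads in a chunked helper and benchmark different chunk sizes. A sketch, again assuming POSIX and the hypothetical `matrix.bin` file; `CHUNK_DOUBLES` is just a placeholder value to tune (larger chunks amortise the syscall cost, smaller ones waste less bandwidth when you only need a few values):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define CHUNK_DOUBLES 8192   /* 64 KiB per read; try anything from 4 KiB up */

/* Read the chunk containing element `idx` into buf; returns the index of
 * the first element in the chunk, or (size_t)-1 on error. */
static size_t read_chunk(int fd, size_t idx, double buf[CHUNK_DOUBLES])
{
    size_t first  = (idx / CHUNK_DOUBLES) * CHUNK_DOUBLES;
    off_t  offset = (off_t)(first * sizeof(double));
    ssize_t n = pread(fd, buf, CHUNK_DOUBLES * sizeof(double), offset);
    return n > 0 ? first : (size_t)-1;
}

int main(void)
{
    int fd = open("matrix.bin", O_RDONLY);   /* hypothetical data file */
    if (fd < 0) { perror("open"); return 1; }

    double chunk[CHUNK_DOUBLES];
    size_t idx = 123456;                     /* arbitrary example index */
    size_t first = read_chunk(fd, idx, chunk);
    if (first != (size_t)-1)
        printf("element %zu = %f\n", idx, chunk[idx - first]);

    close(fd);
    return 0;
}
```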
I'm sorry but I'm afraid your best next steps depend to a great extent on the algorithm and the data you are using.