I am doing a sort_by_key with key and value int arrays of 80 million elements each.
The device is a GTX 560 Ti with 2 GB of VRAM. When the available memory gets down to around 600 MB, the sort still runs but slows down dramatically.
thrust::sort_by_key does indeed allocate O(N) temporary space -- radix sort is not an in-place sort once the data is larger than what a single multiprocessor can handle. So you need at least 80M * 2 * sizeof(int) = 640 MB for the input data, plus space for the temporaries, which must be at least 320 MB for this sort. I'm not sure exactly why the sort doesn't simply fail when you don't have enough memory -- perhaps 600 MB is a low estimate of what is actually free, or perhaps Thrust is falling back to CPU execution (I doubt it does that).
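If you want to see whether the temporary allocation inside the sort is actually failing, you can wrap the call in a try/catch, since Thrust reports device allocation failures via exceptions. A minimal sketch with sizes matching your case (the try/catch is the point of interest; everything else is kept as bare as possible):

    // Minimal sketch: 80M key/value pairs (640 MB of input), with the
    // sort wrapped in a try/catch to see whether Thrust's internal
    // temporary allocation fails under memory pressure.
    #include <thrust/device_vector.h>
    #include <thrust/sort.h>
    #include <thrust/system_error.h>
    #include <cstdio>
    #include <new>

    int main()
    {
        const size_t N = 80 * 1000 * 1000;          // 80M elements
        thrust::device_vector<int> keys(N);         // 320 MB
        thrust::device_vector<int> vals(N);         // 320 MB

        try
        {
            // Radix sort allocates O(N) temporary storage on top of
            // the 640 MB already used by keys + vals.
            thrust::sort_by_key(keys.begin(), keys.end(), vals.begin());
        }
        catch (const thrust::system_error &e)
        {
            std::printf("sort_by_key failed: %s\n", e.what());
        }
        catch (const std::bad_alloc &)
        {
            std::printf("sort_by_key failed: device allocation failed\n");
        }
        return 0;
    }

If this prints an error rather than silently completing, you know the sort is genuinely running out of memory rather than falling back to something slower.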
Another possible explanation for the performance drop: when you need almost all of the available memory, fragmentation of the free memory may force the driver/runtime to do extra work to allocate such large contiguous arrays, adding overhead.
BTW, how are you measuring available memory?
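For reference, the usual way to check free device memory is cudaMemGetInfo; a minimal sketch:

    // Minimal sketch: query free/total device memory with cudaMemGetInfo.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main()
    {
        size_t free_bytes = 0, total_bytes = 0;
        cudaError_t err = cudaMemGetInfo(&free_bytes, &total_bytes);
        if (err != cudaSuccess)
        {
            std::printf("cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
            return 1;
        }
        std::printf("free: %zu MB / total: %zu MB\n",
                    free_bytes >> 20, total_bytes >> 20);
        return 0;
    }

If your 600 MB figure comes from somewhere else (e.g. summing your own allocations), it may not match what the driver actually has available.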