In C/C++ under Linux, I need to allocate a large (several-gigabyte) block of memory in order to store real-time data from a sensor connected to the Ethernet port, streaming data continuously.
If you `malloc` the needed amount of memory and write to it at that speed, you'll still take a performance hit from all the page faults (i.e., mapping each page of virtual memory to physical memory on first access, which may also involve swapping out the memory of other processes).
To avoid that, you could `memset` the entire allocated buffer to zero before you start reading from the sensor, so that all the needed virtual memory is mapped to physical memory up front.
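A minimal sketch of that pre-faulting approach (the 4 GiB size is just an example, and this assumes a 64-bit build so the allocation fits in `size_t`):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t size = 4ULL * 1024 * 1024 * 1024;  /* example: 4 GiB buffer */

    char *buf = malloc(size);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }

    /* Touch every page once up front so the kernel maps all the
       virtual memory to physical frames now, instead of taking a
       page fault per page inside the time-critical capture loop. */
    memset(buf, 0, size);

    /* ... start reading from the sensor into buf ... */

    free(buf);
    return 0;
}
```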
If you only use the available physical memory, you should see no swapping at all. Using more would cause the memory of other processes to be swapped out to disk; if those processes are idle, that shouldn't pose a problem. If they're active (i.e., using their memory once in a while), some swapping will occur, probably at a much lower rate than the hard drive's bandwidth. The more memory you use, the more memory belonging to active processes gets swapped out, and the more disk activity occurs; at that point, the maximum amount of memory you can use with decent performance is pretty much a matter of trial and error.
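To get a sense of how much headroom you actually have before resorting to trial and error, one way (assuming glibc, whose `sysconf()` supports the non-POSIX `_SC_PHYS_PAGES` and `_SC_AVPHYS_PAGES` names) is:

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long page_size  = sysconf(_SC_PAGESIZE);
    long total_pages = sysconf(_SC_PHYS_PAGES);    /* glibc extension */
    long avail_pages = sysconf(_SC_AVPHYS_PAGES);  /* glibc extension */

    long long total_bytes = (long long)total_pages * page_size;
    long long avail_bytes = (long long)avail_pages * page_size;

    printf("total physical memory: %lld MiB\n", total_bytes / (1024 * 1024));
    printf("currently free:        %lld MiB\n", avail_bytes / (1024 * 1024));
    return 0;
}
```

Sizing your buffer somewhere below the "currently free" figure is a reasonable starting point for the tuning described above.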
If you use more than the available physical memory, you'll definitely cause swapping at the rate of your memory writes, and there's no way to avoid that.