An answer (see below) to one of the questions right here on Stack Overflow gave me an idea for a great little piece of software that could be invaluable to coders everywhere.
My final solution to the problem is vmtouch (https://hoytech.com/vmtouch/). This tool locks the current folder into the (RAM) page cache and then daemonizes into the background.
sudo vmtouch -d -L ./
Put this in your shell rc for fast access (note: no spaces around the = in an alias):
alias cacheThis='sudo vmtouch -d -L ./'
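To verify the lock actually took, vmtouch can also report cache residency. A quick check (these are standard vmtouch flags; the path is just an example):

# Report how much of the tree is currently resident in the page cache
vmtouch -v ./

# Or pre-load everything once without locking (a one-shot warm-up)
vmtouch -t ./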
I searched for a ready-made script for quite a while, because I didn't want to spend a lot of time writing my own ramdisk-rsync script. I'm sure I would have missed some edge cases, which would be quite unpleasant if important code were involved. And I never liked the polling approach.
vmtouch seems like the perfect solution. It also doesn't waste memory the way a fixed-size ramdisk does. I didn't run a benchmark, because 90% of my 1 GB source+build folder was already cached, but it at least feels faster ;)
I don't have exactly what you're looking for, but I'm now using a combination of a software RAM disk and a DRAM-based hardware RAM disk. Since this is Windows, I have a hard 3 GB limit on main memory, so I can't spare much of it for a RAM disk; the extra 4 GB on the 9010 really rocks it. I let my IDE keep all its temporary files on the hardware RAM disk, along with the Maven repository. The DRAM RAM disk has a battery that backs its contents up to a flash card. This sounds like an advertisement, but it really is an excellent setup.
The DRAM disk has dual SATA-300 ports and turns in a 0.0 ms average seek time on most tests ;) Something for the Christmas stocking?
This sounds like disk caching, which your operating system and/or your hard drive will handle for you automatically (to varying degrees of performance, admittedly).
My advice: if you don't like the speed of your drive, buy a high-speed drive purely for compiling. Less labor on your part, and you might have the solution to your compiling woes.
Since this question was originally asked, spinning hard disks have become miserable tortoises compared to SSDs. An SSD is very close to the originally requested RAM disk, in a SKU you can purchase from Newegg or Amazon.
Profile. Make sure you take good measurements of each option. You can even buy things you've already rejected, measure them, and return them, so you know you're working from good data.
Get a lot of RAM. 2 GB DIMMs are very cheap; 4 GB DIMMs are a little over US$100/ea, but that's still not a lot of money compared to what computer parts cost just a few years ago. Whether you end up with a RAM disk or just letting the OS do its thing, this will help. If you're running 32-bit Windows, you'll need to switch to 64-bit to make use of anything over 3 GB or so.
Live Mesh can synchronize from your local RAM drive to the cloud or to another computer, giving you an up-to-date backup.
Move just the compiler outputs. Keep your source code on the real physical disk, but direct the .obj, .dll, and .exe files to be created on the RAM drive (see the sketch after this list).
Consider a DVCS. Clone from the real drive to a new repository on the RAM drive, and "push" your changes back to the parent often, say every time all your tests pass.
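Combining the last two ideas, here's a minimal sketch of such a workflow on Linux, assuming a tmpfs mount point /mnt/rambuild and an on-disk repository at /home/user/project (both paths are made up for the example):

# Create a RAM-backed working area; tmpfs only consumes memory as files are written
sudo mkdir -p /mnt/rambuild
sudo mount -t tmpfs -o size=2g tmpfs /mnt/rambuild

# Clone the on-disk repository into the RAM drive and build there,
# so all object files and binaries are created in RAM
git clone /home/user/project /mnt/rambuild/project
cd /mnt/rambuild/project
make

# Push finished work back to the on-disk parent whenever the tests pass
# (push to a side branch: git refuses pushes to a checked-out branch)
git push origin HEAD:wip-from-ramdisk

Remember that tmpfs contents vanish on reboot, so anything not pushed back to the on-disk repository is lost.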
I wonder if you could build something like a software RAID 1, with a physical disk or partition as one member and a chunk of RAM as the other.
I bet that with a bit of tweaking and some really weird configuration you could get Linux to do this. I'm not convinced it would be worth the effort, though.
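For what it's worth, mdadm's --write-mostly flag gets you most of the way there on Linux: reads are steered to the fast member while writes go to both. A rough sketch, assuming a spare partition /dev/sdb1 (hypothetical) and a 2 GB RAM block device:

# Load the RAM block-device driver with a 2 GB device (rd_size is in KiB)
sudo modprobe brd rd_size=2097152

# Mirror the disk partition with the RAM device; reads are served from
# /dev/ram0, since --write-mostly marks /dev/sdb1 as write-mostly
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/ram0 --write-mostly /dev/sdb1

sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/fastbuild

After a reboot the RAM member is gone, so the array comes up degraded and has to be re-added and resynced from the disk half, which is probably why the effort rarely pays off.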
As James Curran says, since most programs follow the law of locality of reference, the OS disk cache will, over time, narrow the frequently used code and data pages down to a manageable working set.
RAM disks were useful when operating systems had limitations such as dumb caches (Win 3.x, Win 95, DOS). Today the RAM disk advantage is near zero, and if you assign a lot of RAM to one, you starve the system cache manager, hurting overall performance. The rule of thumb is: let your kernel do it. It's the same story as the "memory defragmenter" or "optimizer" programs: they do force pages out of the cache (so you get more free RAM momentarily), but over time they cause a lot of page faults as your loaded programs begin to ask for code and data that was paged out.
So for more performance, get a fast disk I/O subsystem (maybe RAID), a faster CPU, a better chipset (no VIA!), more physical RAM, and so on.