An answer (see below) to one of the questions right here on Stack Overflow gave me an idea for a great little piece of software that could be in
Yep, I've met the same problem. After fruitless googling, I just wrote a Windows service that lazily backs up the RAM drive (actually any folder, because a RAM drive can be mounted into, for example, a folder on the desktop).
http://bitbucket.org/xkip/transparentbackup You can specify an interval for full scans (default 5 minutes) and an interval for scanning only notified files (default 30 seconds). The scan detects changed files via the 'archive' attribute (the OS sets it on every modification, specifically for archiving purposes). Only files modified that way are backed up.
The service leaves a special marker file to make sure the target backup is exactly a backup of the source. If the source is empty and does not contain the marker file, the service performs an automatic restore from the backup. So you can easily destroy the RAM drive and create it again, with automatic data restoration. It is better to use a RAM drive that can create its partition at system startup so this works transparently.
Another solution that I've recently discovered is SuperSpeed SuperCache.
This company also sells a RAM disk, but that is a separate product. SuperCache lets you use extra RAM for block-level caching (which is very different from file caching), and, as another option, mirror your drive to RAM completely. In either scenario you can specify how often dirty blocks are flushed back to the hard disk drive, so writes behave as if on a RAM drive; the mirror scenario makes reads behave that way too. You can create a small partition, for example 2 GB (using Windows), and map the entire partition to RAM.
One interesting and very useful thing about that solution: you can change the caching and mirroring options instantly, at any time, with two clicks. For example, if you want your 2 GB back for gaming or a virtual machine, you can stop mirroring instantly and release the memory. Even open file handles do not break - the partition continues to work, just as a usual drive.
EDIT: I also highly recommend moving the TEMP folder to the RAM drive, because compilers usually do a lot of work in temp. In my case it gave me another 30% of compilation speed.
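The same TEMP-folder trick applies outside Windows too. A minimal sketch for Linux/macOS, assuming you already have a RAM-backed mount (the demo below substitutes a throwaway `mktemp` directory for the real RAM drive path):

```shell
# Point TMPDIR at a RAM-backed directory so compiler scratch files
# never hit the disk. In real use ramtmp would be a directory on
# your RAM drive mount (e.g. /mnt/ramdisk/tmp - an example path).
ramtmp=$(mktemp -d)
export TMPDIR="$ramtmp"
# gcc, clang, and most Unix tools honor TMPDIR for temp files
echo "compiler temp files now go to $TMPDIR"
```

Most toolchains pick up `TMPDIR` automatically; put the export in your shell profile to make it stick.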
Some ideas off the top of my head:
Use Sysinternals' Process Monitor (not Process Explorer) to check what goes on during a build - this will let you see if %temp% is used, for instance (keep in mind that response files are probably created with FILE_ATTRIBUTE_TEMPORARY, which should prevent disk writes if possible, though). I've moved my %TEMP% to a RAM disk, and that gives me minor speedups in general.
Get a RAM disk that supports automatically loading/saving disk images, so you don't have to use boot scripts to do this. Sequential read/write of a single disk image is faster than syncing a lot of small files.
Place your often-used/large header files on the RAM disk, and override your compiler's standard paths to use the RAM drive copies. It will likely not give that much of an improvement after first-time builds, though, as the OS caches the standard headers.
Keep your source files on your hard drive, and sync to the RAM disk - not the other way around. Check out MirrorFolder for doing real-time synchronization between folders - it achieves this via a filter driver, so it only synchronizes what is necessary (and only the changes - a 4 KB write to a 2 GB file will only cause a 4 KB write to the target folder). Figure out how to make your IDE build from the RAM drive although the source files are on your hard disk... and keep in mind that you'll need a large RAM drive for large projects.
In Linux (you never mentioned which OS you're on, so this could be relevant) you can create block devices from RAM and mount them like any other block device (such as an HDD).
You can then create scripts that copy to and from that drive on start-up / shutdown, as well as periodically.
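One way to get such a RAM-backed block device on Linux is the brd kernel module (the sizes and mount point below are examples; a tmpfs mount is a simpler alternative if you don't need an actual block device):

```shell
# Load the brd module to get /dev/ram0. rd_size is in 1 KiB blocks,
# so 524288 is roughly 512 MiB - adjust to your project size.
sudo modprobe brd rd_nr=1 rd_size=524288
# Format and mount it like any other block device.
sudo mkfs.ext4 /dev/ram0
sudo mkdir -p /mnt/ramdisk
sudo mount /dev/ram0 /mnt/ramdisk
```

Note the contents vanish when the module is unloaded or the machine reboots, which is why the copy-in/copy-out scripts below matter.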
For example, you could set it up so you had ~/code and ~/code-real. Your RAM block gets mounted at ~/code on startup, and then everything from ~/code-real (which is on your standard hard drive) gets copied over. On shutdown everything would be copied (rsync'd would be faster) back from ~/code to ~/code-real. You would also probably want that script to run periodically, so you don't lose much work in the event of a power failure, etc.
I don't do this anymore (I used it for Opera when the 9.5 beta was slow, no need anymore).
Here is how to create a RAM disk in Linux.
Use https://wiki.archlinux.org/index.php/Ramdisk to make the RAM disk.
Then I wrote these scripts to move directories to and from the RAM disk. A backup is made into a tar file before moving to the RAM disk. The benefit of doing it this way is that the path stays the same, so all your configuration files don't need to change. When you are done, use uramdir to bring everything back to disk.
Edit: Added C code that will run any command it is given on an interval, in the background. I am sending it tar with --update to update the archive if there are any changes.
I believe this general-purpose solution beats making a unique solution to something very simple. KISS.
Make sure you change the path to rdbackupd.
#!/bin/bash
# May need some error checking for bad input.
# Convert relative path to absolute.
# /bin/pwd gets the real path without symbolic links on my system, and pwd
# keeps the symbolic link. You may need to change it to suit your needs.
somedir=`cd "$1"; /bin/pwd`
somedirparent=`dirname "$somedir"`
# Back up the directory
/bin/tar cf "$somedir.tar" "$somedir"
# Copy. Tried move as https://wiki.archlinux.org/index.php/Ramdisk
# suggests, but I got an error.
mkdir -p "/mnt/ramdisk$somedir"
/bin/cp -r "$somedir" "/mnt/ramdisk$somedirparent"
# Remove the original directory
/bin/rm -r "$somedir"
# Create a symbolic link. It needs to be in the parent of the given folder.
/bin/ln -s "/mnt/ramdisk$somedir" "$somedirparent"
# Run the updater
~/bin/rdbackupd "/bin/tar -uf $somedir.tar $somedir" &
uramdir
#!/bin/bash
# Convert relative path to absolute.
# somepath would probably make more sense.
# pwd, and not /bin/pwd, so we get the symbolic path.
somedir=`cd "$1"; pwd`
# Remove the symbolic link
rm "$somedir"
# Copy the directory back
/bin/cp -r "/mnt/ramdisk$somedir" "$somedir"
# Remove from the RAM disk
/bin/rm -r "/mnt/ramdisk$somedir"
# Stop the updater
killall rdbackupd
rdbackupd.cpp
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>
#include <sys/time.h>

struct itimerval it;
char* command;
volatile sig_atomic_t fired = 0;

void update_archive(int sig)
{
    // system() is not async-signal-safe, so only set a flag here
    // and run the command from the main loop.
    fired = 1;
}

int main(int argc, char** argv)
{
    if (argc < 2)
    {
        printf("rdbackupd: Need command to run\n");
        return 1;
    }
    command = argv[1];

    it.it_value.tv_sec = 1;     // Start right now
    it.it_value.tv_usec = 0;
    it.it_interval.tv_sec = 60; // Run every 60 seconds
    it.it_interval.tv_usec = 0;

    signal(SIGALRM, update_archive);
    setitimer(ITIMER_REAL, &it, NULL); // Start the timer
    while (true)
    {
        pause(); // Sleep until the next SIGALRM instead of busy-waiting
        if (fired)
        {
            fired = 0;
            system(command);
        }
    }
    return 0;
}
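If you would rather not compile anything, a pure-shell stand-in for rdbackupd can be sketched in a few lines (`periodic` is a hypothetical helper name, not part of the scripts above):

```shell
# Run a command every N seconds, like rdbackupd does.
periodic() {
    local interval=$1; shift
    while true; do
        "$@"
        sleep "$interval"
    done
}
# Example use, mirroring the ramdir script's updater line:
# periodic 60 /bin/tar -uf "$HOME/code.tar" "$HOME/code" &
```

The trade-off versus the C version is that `sleep` drifts slightly (interval plus command runtime), whereas setitimer fires on a fixed schedule.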
I had the same idea and did some research. I found the following tools that do what you are looking for:
However, I couldn't manage to get the second one working on 64-bit Windows 7 at all, and it doesn't seem to be maintained at the moment.
The VSuite RAM disk, on the other hand, works very well. Unfortunately, I couldn't measure any significant performance boost compared to the SSD already in place.
There are plenty of RAM drives around; use one of those. Sorry, but that would be reckless.
Only if you work entirely on the RAM disk, which is silly...
Pseudo-ish shell script, ramMake:
# Set up locations
ramdrive=/Volumes/ramspace
project=$HOME/code/someproject
# ...create the RAM drive...
# Sync the project directory to the RAM drive
rsync -av "$project" "$ramdrive"
# Build
cd "$ramdrive"
make
# Optional: copy the built data back to the project directory
rsync -a "$ramdrive/build" "$project/build"
That said, your compiler can possibly do this with no additional scripts. Just change your build output location to a RAM disk; for example, in Xcode it's under Preferences, Building, "Place Build Products in:" and "Place Intermediate Build Files in:".