For deployment reasons, it is slightly easier for me to use symlinks, but these would be for all of my website's core files and configurations, which will be accessed 10's of thou
Have you measured this performance degradation? I suspect it's negligible compared to the time taken to fetch pages over the network.
I have created a file testfile.txt with 1000 lines of blablabla in it, and created a local symlink (testfile.link.txt) to it:
$ ls -n
total 12
lrwxrwxrwx 1 1000 1000 12 2012-09-26 14:09 testfile.link.txt -> testfile.txt
-rw-r--r-- 1 1000 1000 10000 2012-09-26 14:08 testfile.txt
(The -n switch is only there to hide my super-secret username. :))
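For reference, a sketch of how such a test file and symlink can be set up; the exact commands and the blablabla filler are my assumption, not part of the original post:
$ for i in `seq 1 1000`; do echo blablabla; done > testfile.txt   # 1000 lines of 10 bytes each -> 10000 bytes
$ ln -s testfile.txt testfile.link.txt                            # relative symlink pointing at the target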
And then executed 10 rounds of cat-ing into /dev/null 1000 times for each file. (Results are in seconds.)
Accessing the file directly:
$ for j in `seq 1 10`; do ( time -p ( for i in `seq 1 1000`; do cat testfile.txt >/dev/null; done ) ) 2>&1 | grep 'real'; done
real 2.32
real 2.33
real 2.33
real 2.33
real 2.33
real 2.32
real 2.32
real 2.33
real 2.32
real 2.33
Accessing through symlink:
$ for j in `seq 1 10`; do ( time -p ( for i in `seq 1 1000`; do cat testfile.link.txt >/dev/null; done ) ) 2>&1 | grep 'real'; done
real 2.30
real 2.31
real 2.36
real 2.32
real 2.32
real 2.31
real 2.31
real 2.31
real 2.32
real 2.32
Measured on (a rather old install of) Ubuntu:
$ uname -srvm
Linux 2.6.32-43-generic #97-Ubuntu SMP Wed Sep 5 16:43:09 UTC 2012 i686
Of course it's a dumbed-down example (at roughly 2.3 ms per access, most of the time goes into spawning cat, and the run-to-run noise is larger than any difference between the two), but based on this I wouldn't expect much of a performance degradation when using symlinks.
I personally think that using symlinks is more practical: keep each version of your web files in its own versioned directory (e.g. my_web_files.v1, my_web_files.v2), and use the "official" name as a symlink (e.g. my_web_files) pointing to the "live" version. If you want to change the version, just re-link to another versioned directory, as in the sketch below.
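A minimal sketch of that switch-over, assuming GNU ln and the directory names used above (adapt to your layout):
$ ln -s my_web_files.v1 my_web_files      # "official" name points at the live version
$ ln -sfn my_web_files.v2 my_web_files    # switch to v2: -f replaces the old link, -n stops ln from descending into the old target
The web server keeps serving the path my_web_files throughout, so switching (or rolling back) a version is just a matter of re-pointing the link.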