If our organisation were to switch from a central-server VCS like Subversion to a distributed VCS like Git, how do I make sure that all my code is safe from hardware failure?
All developers on your team can have their own branches on the server as well (per ticket, per developer, etc.). That way they don't break the build on the master branch, but they still get to push their work in progress to a server that gets backed up.
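For example (the branch name below is just an illustrative convention, not anything git requires), a developer can publish a local feature branch under their own namespace on the shared server:

```sh
# Push the local branch "my-feature" to a per-developer branch on origin,
# so in-progress work lives on the backed-up server without touching master.
git push origin my-feature:dev/alice/my-feature
```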
My own git_remote_branch tool may come in handy for that kind of workflow (note that it requires Ruby). It simplifies manipulating remote branches.
As a side note on repository safety: on your server you can set up a post-receive hook (the hook that fires whenever someone pushes) that does a simple git push, or even a fresh git clone, to another machine... You get an up-to-date backup after every push!
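A minimal sketch of such a hook, assuming a bare central repository and a second machine reachable as backup-host (both the host name and path are assumptions):

```sh
#!/bin/sh
# .git/hooks/post-receive in the central repository.
# Mirror all branches and tags to a second machine on every push.
git push --mirror backup-host:/srv/git/project-backup.git
```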
We use rsync to back up the individual developers' .git directories to a directory on the server. This is set up using wrapper scripts around git clone and the post-commit (and similar) hooks.
Because it happens in the post-* hooks, developers don't need to remember to do it manually. And because we run rsync with a timeout, developers can keep working even if the server goes down or they are working remotely.
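A sketch of what such a hook might look like, assuming a post-commit hook on each developer machine and a backup server reachable as backup-server (host and paths are assumptions):

```sh
#!/bin/sh
# .git/hooks/post-commit on a developer machine.
GIT_DIR_PATH=$(git rev-parse --git-dir)
# --timeout makes rsync give up quickly instead of blocking the commit
# when the server is down or the developer is working offline.
rsync -az --timeout=10 "$GIT_DIR_PATH/" \
    backup-server:/backups/"$USER"/myrepo.git/ || true
```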
I think it's a fallacy that using a distributed VCS necessarily means that you must use it in a completely distributed fashion. It's completely valid to set up a common git repository and tell everybody that repository is the official one. For normal development workflow, developers would pull changes from the common repository and update their own repositories. Only in the case of two developers actively collaborating on a specific feature might they need to pull changes directly from each other.
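Under that model the day-to-day commands look just like a centralized workflow (assuming origin points at the common repository):

```sh
git pull origin master   # bring in the official history
# ...hack, commit locally as often as you like...
git push origin master   # publish your changes back to the common repo
```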
With more than a few developers working on a project, it would be seriously tedious to have to remember to pull changes from everybody else. What would you do if you didn't have a central repository?
At work we have a backup solution that backs up everybody's working directories daily, and writes the whole lot to a DVD weekly. So, although we have a central repository, each individual one is backed up too.
It's not uncommon to use a "central" server as the authority in a DVCS setup, which also gives you an obvious place to take your backups.
You could have developers' home directories mounted from remote storage over the local network; then you only have to worry about keeping that network storage safe. Or you could use something like Dropbox to copy your local repository elsewhere seamlessly.
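For the network-mount approach, one sketch is an NFS mount over the developer's home directory, so local repositories physically live on backed-up storage (the server name and export path are assumptions):

```sh
# Mount an NFS export over a developer's home directory; anything they
# clone or commit locally then lands on the backed-up file server.
sudo mount -t nfs fileserver:/export/home/alice /home/alice
```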
I find this question a little bizarre. Assuming you're using a non-distributed version control system such as CVS, you will have a repository on the central server and work in progress on developers' machines. How do you back up the repository? How do you back up developers' work in progress? The answers to those questions are exactly what you need to answer this one.
With distributed version control, the repositories on developers' machines are just work in progress. Do you want them backed up? Then back them up! It's as simple as that.
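A minimal sketch of "then back it up", assuming the repositories live under /home/alice/src and a backup host exists (both are assumptions):

```sh
# Hypothetical crontab entry: copy everything under ~/src, including the
# .git directories, to a backup host at 02:00 every night.
0 2 * * * rsync -az --delete /home/alice/src/ backup-host:/backups/alice/src/
```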
We have an automated backup system that grabs whichever directories on our machines we specify, so I add any repositories and working copies on my machine to that list, including both git and CVS repositories.
By the way, if you are using distributed version control in a company releasing a product, then you will have a central repository. It's the one you release from. It might not be on a special server; it might be on some developer's hard drive. But the repository you release from is the central repository. (I suppose if you haven't released yet, you might not have one yet.) I kind of feel that all projects have one or more central repositories. (And really, if they have more than one, it's two projects and one is a fork.) This goes for open source as well.
Even if you didn't have a central repository, the solution is the same: back up the work on developers' machines. You should have been doing that anyway. The fact that the work in progress lives in distributed repositories instead of CVS working copies or plain unversioned directories is immaterial.