If our organisation were to switch from a central-server VCS like subversion to a distributed VCS like git, how do I make sure that all my code is safe from hardware failure?
I think that you will find that in practice developers will prefer to use a central repository rather than pushing and pulling between each other's local repositories. Once you've cloned a central repository and are working on tracking branches, fetching and pushing are trivial commands. Adding half a dozen remotes for all your colleagues' local repositories is a pain, and those repositories may not always be accessible (switched off, on a laptop taken home, etc.).
At some point, if you are all working on the same project, all the work needs to be integrated. This means that you need an integration branch where all the changes come together. That branch naturally needs to live somewhere accessible to all the developers; it doesn't belong, for example, on the lead developer's laptop.
Once you've set up a central repository you can use a cvs/svn-style workflow to check in and update: cvs update becomes git fetch plus git rebase if you have local changes, or just git pull if you don't, and cvs commit becomes git commit followed by git push.
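As a rough sketch (assuming the central remote is called origin and the shared branch is master, the usual defaults), the day-to-day equivalents look like this:

git fetch origin              # get the latest central history
git rebase origin/master      # replay your local commits on top (cvs update, with local changes)
git pull                      # fetch and merge in one step (cvs update, no local changes)
git commit -a -m "message"    # record your change locally
git push origin master        # publish it to the central repository (cvs commit)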
With this setup you are in much the same position as with a fully centralized VCS. Once developers submit their changes (git push), which they need to do for the changes to be visible to the rest of the team, the changes are on the central server and will be backed up.
What takes discipline in both cases is stopping developers from keeping long-running changes out of the central repository. Most of us have probably worked in a situation where one developer is working on feature 'x' which needs a fundamental change to some core code. The change will force everyone else to do a complete rebuild, but the feature isn't ready for the main stream yet, so he just keeps it checked out until a suitable point in time.
The situation is very similar in both cases, although there are some practical differences. With git, because you can make local commits and manage local history, an individual developer may not feel the need to push to the central repository as keenly as with something like cvs.
On the other hand, local commits can be turned into an advantage. Pushing all local commits to a safe place on the central repository is not difficult, and local branches can be stored there under a developer-specific ref namespace.
For example, for Joe Bloggs, an alias could be created in his local repository so that running (e.g.) git mybackup performs something like the following:

git push origin +refs/heads/*:refs/jbloggs/*
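One way to set that alias up (just a sketch, using git's built-in alias support) would be:

git config alias.mybackup 'push origin +refs/heads/*:refs/jbloggs/*'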
This is a single command that can be used at any point (such as the end of the day) to make sure that all his local changes are safely backed up.
This helps with all sorts of disasters. Joe's machine blows up? He can use another machine, fetch his saved commits and carry on from where he left off. Joe's ill? Fred can fetch Joe's branches to grab that 'must have' fix that he made yesterday but didn't have a chance to test against master.
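For instance, Fred could pull Joe's backed-up branches into his own clone with something like the following (the branch name 'hotfix' is only illustrative):

git fetch origin '+refs/jbloggs/*:refs/remotes/jbloggs/*'
git checkout -b joes-hotfix jbloggs/hotfix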
To go back to the original question: does there need to be a difference between a dVCS and a centralized VCS here? You say that half-implemented features and bugfixes will not end up on the central repository in the dVCS case, but I would contend that there need be no difference.
I have seen many cases where a half-implemented feature stays on one developer's working box when using a centralized VCS. Either it takes a policy that allows half-written features to be checked in to the main stream, or a decision has to be made to create a central branch.
In the dVCS case the same thing can happen, and the same decision should be made: if there is important but incomplete work, it needs to be saved centrally. The advantage of git is that creating this central branch is almost trivial.
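For example, if the incomplete work lives on a local branch called feature-x (a made-up name for illustration), publishing it as a central branch is a one-liner:

git push -u origin feature-x

Anyone else on the team can then fetch it and check it out like any other branch.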