Code defaults are easier for unit testing.
Code defaults support multiple scenarios; DB column defaults are one-size-fits-all. For example, the appropriate default may vary depending on customer type, which a single column default cannot express.
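A minimal sketch of what a per-scenario default might look like in the middle tier (the table of defaults, the function, and the values are all illustrative, not from the answer):

```python
# Hypothetical middle-tier defaults keyed by customer type -- something a
# single DB column default cannot express.
CREDIT_LIMIT_DEFAULTS = {
    "retail": 500,
    "wholesale": 10_000,
}

def new_account(customer_type, credit_limit=None):
    """Build an account record, applying a type-specific default when none is given."""
    if credit_limit is None:
        credit_limit = CREDIT_LIMIT_DEFAULTS[customer_type]
    return {"customer_type": customer_type, "credit_limit": credit_limit}
```

Note that this kind of default is trivial to unit test without standing up a database, which is the point of the first paragraph.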
DB column defaults are often opaque to the maintenance developer because they live far away from the INSERT statement, which usually sits in middle-tier code, a stored procedure, etc. The presence or absence of defaults can be surprising, either way.
DB column defaults defend the database against clients that are too lazy to supply values at all, which is a form of data corruption.
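To make the last-line-of-defense behavior concrete, here is a sketch using SQLite (table and values are illustrative): the column default fills in only when the client omits the column entirely.

```python
import sqlite3

# A column default catches an INSERT that omits the column altogether.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT DEFAULT 'pending')"
)
conn.execute("INSERT INTO orders (id) VALUES (1)")  # status omitted by a "lazy" client
status = conn.execute("SELECT status FROM orders WHERE id = 1").fetchone()[0]
# status is now 'pending', supplied by the database, not the client
```

One caveat worth knowing: in SQLite (and most databases) an explicit NULL bypasses the default, so the defense only covers columns that were left out of the INSERT list.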
Both kinds of defaults can be subverted by client developers, but it is easier to set up barriers against defective defaults in the middle tier. AFAIK, no database lets you require that a field take the default value on INSERT. (Edit: a TRIGGER could enforce this, but you'd have to copy the default into the trigger, and the trigger would overwrite any inserted values with the default.) An example of where this might matter: the various sentinel tokens people use for an unknown-but-future date or an unknown-but-past date, or whether they use GETDATE(), which includes a time component, versus a default date with year, month, and day but no time.
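The trigger workaround from the edit can be sketched in SQLite (table name, column, and the sentinel date are all illustrative). Notice how the default value has to be duplicated inside the trigger body, exactly the maintenance hazard the answer describes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (
    id INTEGER PRIMARY KEY,
    placeholder_date TEXT DEFAULT '9999-12-31'
);
-- The trigger forces the column back to the default, overwriting whatever
-- the client inserted. The default value is copied here by hand.
CREATE TRIGGER force_default AFTER INSERT ON events
BEGIN
    UPDATE events SET placeholder_date = '9999-12-31' WHERE id = NEW.id;
END;
""")
conn.execute(
    "INSERT INTO events (id, placeholder_date) VALUES (1, '2020-01-01')"
)
value = conn.execute(
    "SELECT placeholder_date FROM events WHERE id = 1"
).fetchone()[0]
# value is '9999-12-31': the client's '2020-01-01' was silently overwritten
```

This demonstrates why the trigger is a blunt instrument: it cannot distinguish a lazy client from one that supplied a deliberate value.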
I'd recommend making sure defaults exist where they make sense in the DB, but not actually relying on them. DB defaults should be the defaults of last resort; the real defaults belong solidly in a middle tier (i.e. stored procs or a data access layer). A DB column default is like an exception handler: when someone accidentally forgot to provide a value, what value should be used to prevent data corruption?
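The recommended layering might look like this sketch (the DAL function, table, and "guest" default are assumptions for illustration): the data access layer applies the real default explicitly, and the column default exists only to catch callers that bypass the DAL.

```python
import sqlite3

# The column default exists as a last resort, mirroring the DAL's default.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, role TEXT DEFAULT 'guest')"
)

def insert_user(conn, user_id, role=None):
    """Data access layer: supplies the default itself, never leaning on the DB."""
    if role is None:
        role = "guest"  # the real default, kept in sync with the column default
    conn.execute("INSERT INTO users (id, role) VALUES (?, ?)", (user_id, role))

insert_user(conn, 1)                                # DAL default applied
conn.execute("INSERT INTO users (id) VALUES (2)")   # lazy client: DB default catches it
```

Either path yields a valid row, but only the DAL path can grow per-scenario logic later; the column default just prevents corruption when someone goes around it.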