What do I have to consider in database design for a new application that should be able to support the most common relational database systems (SQL Server, MySQL, ...)?
95% portable is nearly as good as fully portable if you can isolate the platform-dependent code into a specific layer. Just as Java has been described as 'write once, test everywhere', you still have to test the application on every platform you intend to run it on.
If you are circumspect with your platform-specific code, you can use portable code for the 95+% of the functionality that can be done adequately in a portable way. The remaining parts that need a stored procedure or some other platform-dependent construct can be built as a series of platform-dependent modules behind a standard interface; depending on the platform, you use the module appropriate to it.
This is the difference between 'test everywhere' and 'build platform-specific modules and test everywhere'. You will need to test on all supported platforms anyway - you cannot get away from that. The extra build work is relatively minor, and probably less than building a really convoluted architecture to try to do these things completely portably.
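As a rough sketch of that module-behind-an-interface idea, here is what it might look like in Java. The PagingDialect interface and the paging example are my own invention for illustration; the point is only that the platform-dependent SQL lives in small per-platform classes selected at runtime, while everything else stays portable.

```java
// Hypothetical example: row paging is one place where engines still disagree,
// so it is isolated behind a single small interface.
interface PagingDialect {
    /** Wraps an already-ordered query so that it returns one page of rows. */
    String pageQuery(String orderedSql, int offset, int limit);
}

class MySqlPagingDialect implements PagingDialect {
    // LIMIT/OFFSET syntax, also valid for SQLite.
    public String pageQuery(String orderedSql, int offset, int limit) {
        return orderedSql + " LIMIT " + limit + " OFFSET " + offset;
    }
}

class SqlServerPagingDialect implements PagingDialect {
    // OFFSET ... FETCH requires the query to contain an ORDER BY clause.
    public String pageQuery(String orderedSql, int offset, int limit) {
        return orderedSql + " OFFSET " + offset + " ROWS FETCH NEXT " + limit + " ROWS ONLY";
    }
}

final class PagingDialects {
    /** Picks the module appropriate to the configured platform. */
    static PagingDialect forPlatform(String platform) {
        switch (platform.toLowerCase()) {
            case "sqlserver":
                return new SqlServerPagingDialect();
            case "mysql":
            case "sqlite":
                return new MySqlPagingDialect();
            default:
                throw new IllegalArgumentException("Unsupported platform: " + platform);
        }
    }
}
```

The rest of the data-access code only ever talks to PagingDialect, so adding another platform later means adding one more class and testing it, not reworking the portable 95%.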
Rule 1: Don't use database-specific features.
Rule 2: Don't use stored procedures.
Rule 3: If you break Rule 1, then break Rule 2 as well.
There have been a lot of comments about not using stored procedures. That is because their syntax and semantics differ widely between databases, so porting them is difficult. You do not want heaps of code that you have to rewrite and retest.
If you decide that you do need to use database-specific features, then hide those details behind a stored procedure. Calling a stored procedure looks much the same from one database to another. Inside the procedure, which on Oracle is written in PL/SQL, you can use any Oracle constructs you find useful. You then write an equivalent procedure for each of the other target databases. This way, the database-specific parts live in that database only.
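To make the "calling them is fairly similar" point concrete, here is a hedged JDBC sketch. The procedure name next_invoice_number and its parameters are hypothetical; each target database would ship its own implementation of a procedure with that signature, and the calling code stays the same.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;

class NextInvoiceNumber {
    static long fetch(Connection conn, int companyId) throws SQLException {
        // The {call ...} escape is translated by each JDBC driver into the
        // vendor's own invocation syntax, so this code does not change when
        // the procedure behind it is PL/SQL, T-SQL, or anything else.
        try (CallableStatement cs = conn.prepareCall("{call next_invoice_number(?, ?)}")) {
            cs.setInt(1, companyId);
            cs.registerOutParameter(2, Types.BIGINT);
            cs.execute();
            return cs.getLong(2);
        }
    }
}
```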
If at all possible I would avoid doing this. I have worked with several of these database-agnostic applications in the past and they were horribly slow (one particularly painful example I can think of was a call center application that took ten minutes to move from one screen to another on a busy day), due to the need to write generic SQL and forgo the performance tuning that was best for the particular backend.
I currently support Oracle, MySQL, and SQLite, and to be honest it's tough. Some recommendations would be:
Is it worth it? Well, it depends. Commercially it is worth it for enterprise-level applications, but for a blog or, say, a website, you might as well stick with one platform if you can.
To complement this answer, and as a general rule, do not let the server generate or calculate data. Always send straight SQL instructions, with no formulas in them. Do not use default-value properties (or keep them basic, not formulas) and do not use validation rules. Both default values and validation rules should be implemented on the client side.
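As a small, invented illustration of that rule: the default (created_at) and the validation (a non-negative total) are computed in application code, and the INSERT sends plain, explicit values that any of the target databases will accept without column defaults or triggers.

```java
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Instant;

class OrderDao {
    static void insertOrder(Connection conn, long customerId, BigDecimal total)
            throws SQLException {
        // Validation rule lives in client code, not in the database.
        if (total.signum() < 0) {
            throw new IllegalArgumentException("total must not be negative");
        }
        String sql = "INSERT INTO orders (customer_id, total, created_at) VALUES (?, ?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, customerId);
            ps.setBigDecimal(2, total);
            // 'Default' value computed client-side instead of DEFAULT CURRENT_TIMESTAMP.
            ps.setTimestamp(3, Timestamp.from(Instant.now()));
            ps.executeUpdate();
        }
    }
}
```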
IMO it depends on the type of app you are developing:

1. An app where you control the environment and can choose the DBMS yourself.
2. An app that has to run against whatever DBMS your customers already have in place.
For case 1, just pick one DBMS that's best suited to your needs, and code against that, using the full power of all its proprietary features.
For case 2, you will likely find that it is quite feasible to stick to the common subset of operations supported by all DBMSs that you intend to support.
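As a small illustration of that common subset (the query and table are invented): standard constructs such as CASE run unchanged on SQL Server, MySQL, Oracle and SQLite, while vendor shortcuts like Oracle's DECODE tie the same logic to one engine.

```java
final class AccountQueries {
    // Portable: CASE is standard SQL and part of the common subset.
    static final String PORTABLE_STATUS =
        "SELECT id, CASE WHEN active = 1 THEN 'open' ELSE 'closed' END AS status "
      + "FROM accounts";

    // Not portable: DECODE is an Oracle extension, so this version would have
    // to be rewritten and retested for every other engine.
    static final String ORACLE_ONLY_STATUS =
        "SELECT id, DECODE(active, 1, 'open', 'closed') AS status FROM accounts";
}
```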