I have designed databases several times in my company. To increase the performance of a database, I look only at normalisation and indexing.
If you were asked to increase the performance of a database, what other things would you look at?
That's a very vague question.
You say you look for indexing, but you can't look at indexing in isolation. You have to look at the queries that are being run, the execution plans, the indexes that are being used and how they are being used. The Profiler tool can help a great deal in determining which queries are inefficient.
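To make that concrete, here is a minimal sketch of what "looking at the queries" can mean in practice - the table and column names are just placeholders, not anything from your schema:

    -- Show I/O and timing statistics for the query that follows.
    -- (In SSMS, also turn on "Include Actual Execution Plan" to see the plan itself.)
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- Placeholder query: swap in your own table, columns and predicate.
    SELECT OrderID, OrderDate, TotalAmount
    FROM dbo.Orders
    WHERE CustomerID = 42;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;

Comparing logical reads and elapsed time before and after an index change tells you far more than the index definition on its own.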
Aside from that - make sure a maintenance plan is set up. You should be updating statistics and defragmenting/rebuilding indexes at least once a week in a heavy transactional database.
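A real maintenance plan (or a scripted job) would loop over every table, but the core of the weekly work is roughly this - index and table names are placeholders:

    -- Rebuild a heavily fragmented index (use REORGANIZE for light fragmentation).
    ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;

    -- Refresh the optimizer statistics for the table.
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;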
If you have the infrastructure, look at your file and filegroup settings. You should try to put tables and/or indexes that are large and frequently used on different physical drives, if possible. If you have any very large tables, you might think of partitioning them.
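For illustration only, a partitioning sketch that spreads a large table across filegroups by date - the filegroups fg2022/fg2023/fg2024 and the table are hypothetical and would have to exist in your database already:

    -- Map date ranges to partitions...
    CREATE PARTITION FUNCTION pf_OrderDate (date)
        AS RANGE RIGHT FOR VALUES ('2023-01-01', '2024-01-01');

    -- ...and map those partitions to filegroups on different physical drives.
    CREATE PARTITION SCHEME ps_OrderDate
        AS PARTITION pf_OrderDate TO (fg2022, fg2023, fg2024);

    -- Create (or rebuild) the large table on the partition scheme.
    CREATE TABLE dbo.OrderHistory
    (
        OrderID     bigint NOT NULL,
        OrderDate   date   NOT NULL,
        TotalAmount money  NOT NULL,
        CONSTRAINT PK_OrderHistory PRIMARY KEY (OrderID, OrderDate)
    ) ON ps_OrderDate (OrderDate);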
If you're still having performance problems, denormalization can sometimes help - but it all depends on the situation.
I'm going to stop there - don't want this answer to become the world's most random list of SQL performance tips. I recommend you be more specific about where you think the performance issues are, and tell us a bit more about the database (size, current indexing strategy, transaction frequency, any large reports you need to generate, etc.)
Optimize the logical design
The logical level is about the structure of the queries and the tables themselves. Try to optimize this first. The goal is to access as little data as possible at the logical level.
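A small illustration of what I mean, with made-up names - the same report can touch far less data if you select only the columns you need and keep the predicate sargable:

    -- Touches every column and prevents an index seek on OrderDate:
    SELECT *
    FROM dbo.Orders
    WHERE YEAR(OrderDate) = 2024;

    -- Touches only the needed columns and lets an index on OrderDate be used:
    SELECT OrderID, TotalAmount
    FROM dbo.Orders
    WHERE OrderDate >= '2024-01-01'
      AND OrderDate <  '2025-01-01';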
Optimize the physical design
The physical level deals with non-logical considerations, such as the type of indexes, table parameters, etc. The goal is to optimize I/O, which is almost always the bottleneck. Tune each table to fit its needs: a small table can be kept permanently in the DBMS cache, a table with a low write rate can use different settings than one with a high update rate so that it takes less disk space, and different indexes can be chosen depending on the queries. You can also denormalize data transparently with materialized views.
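In SQL Server the closest thing to a materialized view is an indexed view. A hedged sketch with made-up names (the SUM column is assumed NOT NULL, which indexed views require):

    -- Denormalize an aggregate transparently: queries against the base table
    -- can be answered from this pre-computed, indexed result.
    CREATE VIEW dbo.vw_SalesByCustomer
    WITH SCHEMABINDING
    AS
    SELECT CustomerID,
           COUNT_BIG(*)     AS OrderCount,
           SUM(TotalAmount) AS TotalSales
    FROM dbo.Orders
    GROUP BY CustomerID;
    GO

    -- The unique clustered index is what actually materializes the view.
    CREATE UNIQUE CLUSTERED INDEX IX_vw_SalesByCustomer
        ON dbo.vw_SalesByCustomer (CustomerID);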
Try first to improve the logical design, then the physical design. (The boundary between the two is admittedly vague, so we can argue about my categorization.)
Optimize the maintenance
The database must also be operated correctly to stay as efficient as possible. This includes a few maintenance tasks that can have an impact on performance, e.g. updating statistics and rebuilding fragmented indexes.
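For the index-maintenance part, a common sketch is to check fragmentation first and only touch the indexes that need it; the thresholds below are just the usual rules of thumb, not anything mandated:

    -- List fragmented indexes in the current database;
    -- roughly: > 30% fragmentation -> REBUILD, 5-30% -> REORGANIZE.
    SELECT OBJECT_NAME(ips.object_id)       AS TableName,
           i.name                           AS IndexName,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 5
      AND i.name IS NOT NULL
    ORDER BY ips.avg_fragmentation_in_percent DESC;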
To your toolkit of normalisation and indexing, with extremely large tables you may also want to add the pros and cons of partitioning them. But you've got the key ones there already.
In order to increase performance you need to monitor your database first. You can run a trace and then load it into SQL Server Profiler to find out which queries are the slowest. After that you can concentrate on them.
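If you would rather not run a trace, a similar picture can be pulled from the plan cache. A sketch - the TOP count and the ordering by elapsed time are just one reasonable choice:

    -- Top 10 cached queries by total elapsed time.
    SELECT TOP (10)
           qs.total_elapsed_time / 1000 AS total_elapsed_ms,
           qs.execution_count,
           SUBSTRING(st.text,
                     (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                           WHEN -1 THEN DATALENGTH(st.text)
                           ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_elapsed_time DESC;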
You can also use dynamic management views and functions to find out which indexes are missing, and to retrieve statistics about your existing indexes, such as how often they are actually used.
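A sketch of the kind of DMV queries I mean - keep in mind the missing-index DMVs only produce suggestions, so treat the output as a hint rather than a to-do list:

    -- Suggested missing indexes, roughly ranked by estimated benefit.
    SELECT mid.statement AS table_name,
           mid.equality_columns,
           mid.inequality_columns,
           mid.included_columns,
           migs.user_seeks,
           migs.avg_total_user_cost * migs.avg_user_impact * migs.user_seeks AS estimated_benefit
    FROM sys.dm_db_missing_index_details AS mid
    JOIN sys.dm_db_missing_index_groups AS mig
      ON mig.index_handle = mid.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS migs
      ON migs.group_handle = mig.index_group_handle
    ORDER BY estimated_benefit DESC;

    -- How the existing indexes are actually used (seeks/scans vs. update cost).
    SELECT OBJECT_NAME(ius.object_id) AS table_name,
           i.name                     AS index_name,
           ius.user_seeks, ius.user_scans, ius.user_lookups, ius.user_updates
    FROM sys.dm_db_index_usage_stats AS ius
    JOIN sys.indexes AS i
      ON i.object_id = ius.object_id AND i.index_id = ius.index_id
    WHERE ius.database_id = DB_ID();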