I have been working on a web project (ASP.NET) for around six months. The final product is about to go live. The project uses SQL Server as the database. We have done performance
As the old saying goes, "normalize till it hurts, denormalize till it works".
I love this one! This is exactly the kind of thing that should not be accepted anymore. I can imagine that, back in dBASE III times, when you could not open more than 4 tables at a time (unless you changed some of your AUTOEXEC.BAT parameters AND rebooted your computer, ahah! ...), there was some interest in denormalisation.
But nowadays I see this solution as a gardener waiting for a tsunami to water his lawn. Please use the available watering can (SQL Profiler).
And don't forget that each time you denormalize part of your database, your capacity to further adapt it decreases, the risk of bugs in code increases, and the whole system becomes less and less sustainable.
That may not be the right decision. Identify all your DB interactions and profile them independently, then find the offending ones and work out how to maximize performance there. Also, turning on audit logging on your DB and mining the logs might reveal better optimization points.
There can be a million reasons for that; use SQL Profiler and Query Analyzer to determine why your queries are getting slow before going down the "schema change" road. It is not unlikely that all you need to do is create a couple of indexes and schedule "update statistics"... but as I said, Profiler and Query Analyzer are the best tools for finding out what is going on.
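For illustration, a minimal sketch of what that kind of fix looks like in T-SQL (the table, column, and index names here are hypothetical, not from the question):

    -- Add an index to support a frequently filtered column
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
        ON dbo.Orders (CustomerId);

    -- Refresh optimizer statistics; this is the statement you would
    -- schedule as a recurring SQL Agent job
    UPDATE STATISTICS dbo.Orders;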
We've always tried to develop using a database that is as close to the "real world" as possible. That way you avoid a lot of gotchas like this one, since any ol' developer would go mental if his connection kept timing out during debugging. The best way to debug SQL performance problems, IMO, is what Mitch Wheat suggests: profile to find the offending scripts and start with them. Optimizing scripts can take you far, and then you need to look at indexes. Also make sure that your SQL Server has enough horsepower; IO (disk) in particular is important. And don't forget: cache is king. Memory is cheap; buy more. :)
In the scheme of things, a few million rows is not a particularly large database.
Assuming we are talking about an OLTP database, denormalising without first identifying the root cause of your bottlenecks is a very, very bad idea.
The first thing you need to do is profile your query workload over a representative time period to identify where most of the work is being done (for instance, using SQL Profiler, if you are using SQL Server). Look at the number of logical reads a query performs multiplied by the number of times executed. Once you have identified the top ten worst performing queries, you need to examine the query execution plans in detail.
I'm going to go out on a limb here (because it is usually the case), but I would be surprised if your problem is not either
This SO answer describes how to profile to find the worst performing queries in a workload.
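On SQL Server 2005 and later, a quick approximation of that ranking (logical reads per execution multiplied by execution count) can also be pulled straight from the plan cache DMVs. A minimal sketch, offered as one possible approach rather than the method from the linked answer:

    -- Top 10 cached statements by total logical reads
    SELECT TOP 10
           qs.total_logical_reads,
           qs.execution_count,
           qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
           SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                     (CASE qs.statement_end_offset
                           WHEN -1 THEN DATALENGTH(st.text)
                           ELSE qs.statement_end_offset
                      END - qs.statement_start_offset) / 2 + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;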
First, make sure your database is reasonably healthy: run DBCC DBREINDEX on it if possible, or DBCC INDEXDEFRAG and UPDATE STATISTICS if you can't afford the performance hit of a full rebuild.
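A minimal sketch of those maintenance commands (the database, table, and index names are hypothetical; on SQL Server 2005 and later, ALTER INDEX ... REBUILD / REORGANIZE supersedes the two DBCC commands):

    -- Full index rebuild on a table: thorough, but takes locks
    DBCC DBREINDEX ('dbo.Orders');

    -- Lighter-weight defrag of a single index: can run while the system is in use
    DBCC INDEXDEFRAG (MyDatabase, 'dbo.Orders', IX_Orders_CustomerId);

    -- Refresh statistics afterwards
    UPDATE STATISTICS dbo.Orders;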
Run Profiler for a reasonable sample time, enough to capture most of the typical activity, but filter on a duration greater than something like 10 seconds; you don't care about the things that only take a few milliseconds, so don't even look at those.
Now that you have your longest-running queries, tune the snot out of them. Start with the ones that show up the most, look at their execution plans in Query Analyzer, take some time to understand them, and add indexes where necessary to speed retrieval.
Look at creating covering indexes; change the app if needed if it's doing SELECT * FROM... when it only needs SELECT LASTNAME, FIRSTNAME....
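As a sketch of what a covering index might look like (the table and column names are hypothetical): on SQL Server 2005 and later the INCLUDE clause lets the narrower SELECT be answered entirely from the index, with no lookup back to the base table; on SQL Server 2000 you would add the extra columns to the index key instead.

    -- A query filtering on DepartmentId and selecting only LastName, FirstName
    -- can be satisfied by this index alone
    CREATE NONCLUSTERED INDEX IX_Employees_DepartmentId_Names
        ON dbo.Employees (DepartmentId)
        INCLUDE (LastName, FirstName);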
Repeat the Profiler sampling with a duration filter of 5 seconds, 3 seconds, etc., until performance meets your expectations.