Handling large databases

asp.net · open · 14 answers · 1684 views
误落风尘 · 2021-01-31 00:21

I have been working on a web project (ASP.NET) for around six months, and the final product is about to go live. The project uses SQL Server as the database. We have done performance

14 answers
  •  盖世英雄少女心
    2021-01-31 01:13

    In the scheme of things, a few million rows is not a particularly large database.

    Assuming we are talking about an OLTP database, denormalising without first identifying the root cause of your bottlenecks is a very, very bad idea.

    The first thing you need to do is profile your query workload over a representative time period to identify where most of the work is being done (for instance, using SQL Profiler, if you are using SQL Server). Look at the number of logical reads a query performs multiplied by the number of times executed. Once you have identified the top ten worst performing queries, you need to examine the query execution plans in detail.
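    One way to get that ranking without running a trace is SQL Server's `sys.dm_exec_query_stats` DMV. The sketch below is a starting point to adapt, not a drop-in script; note that the stats are cumulative only for plans still in the cache, so run it after a representative workload period:

    ```sql
    -- Top 10 cached queries by cumulative logical reads
    -- (total_logical_reads already reflects reads x executions).
    SELECT TOP 10
        qs.execution_count,
        qs.total_logical_reads,
        qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset
              END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;
    ```

    From there, pull the actual execution plan for each statement and look for scans, key lookups, and large sort or hash operators.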

    I'm going to go out on a limb here (because it is usually the case), but I would be surprised if your problem is not one of the following:

    1. Absence of the 'right' covering indexes for the costly queries
    2. A poorly configured or under-specified disk subsystem

    This SO answer describes how to profile to find the worst performing queries in a workload.
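    On the first cause: a covering index carries every column a query touches, so the engine can answer it from the index alone with no lookups into the base table. A hedged sketch, where the `Orders` table and its columns are purely illustrative (nothing in the question names them):

    ```sql
    -- Suppose the plan shows a costly query shaped like:
    --   SELECT OrderDate, TotalDue
    --   FROM dbo.Orders
    --   WHERE CustomerID = @id
    --   ORDER BY OrderDate DESC;
    -- Key the index on the predicate and sort columns, and INCLUDE the
    -- remaining selected columns so the index covers the query outright:
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
        ON dbo.Orders (CustomerID, OrderDate DESC)
        INCLUDE (TotalDue);
    ```

    The trade-off is write and storage cost, which is why you build such indexes only for the queries your profiling identified as the worst offenders.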
