Handling large databases

2021-01-31 00:21

I have been working on a web project (ASP.NET) for around six months. The final product is about to go live. The project uses SQL Server as the database. We have done performance testing with large volumes of data, and performance degrades noticeably once the tables reach a few million rows, so we have started denormalizing some tables to speed things up.

14 Answers
  • 2021-01-31 00:55

    I think it's best to keep your OLTP-type data normalized to prevent your core data from getting 'polluted'; denormalizing it will bite you down the road.

    If the bottleneck is reporting or read-only needs, I personally see no problem having denormalized reporting tables in addition to the normalized 'production' tables; create a process to roll up to whatever level you need to make queries snappy. A simple SP or nightly process that periodically rolls up and denormalizes tables used only in a read-only fashion can often make a huge difference in the users' experience.
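
    For illustration, a minimal sketch of such a nightly roll-up procedure, assuming hypothetical Orders/OrderLines tables and an rpt.DailySales reporting table (none of these names come from the question); you would typically schedule it with SQL Server Agent:

    -- Hypothetical nightly roll-up of normalized order tables into a
    -- denormalized, read-only reporting table.
    CREATE PROCEDURE dbo.usp_RefreshDailySales
    AS
    BEGIN
        SET NOCOUNT ON;

        TRUNCATE TABLE rpt.DailySales;  -- reporting copy only, safe to rebuild

        INSERT INTO rpt.DailySales (SaleDate, ProductId, TotalQty, TotalAmount)
        SELECT  CAST(o.OrderDate AS date),
                ol.ProductId,
                SUM(ol.Quantity),
                SUM(ol.Quantity * ol.UnitPrice)
        FROM    dbo.Orders     AS o
        JOIN    dbo.OrderLines AS ol ON ol.OrderId = o.OrderId
        GROUP BY CAST(o.OrderDate AS date), ol.ProductId;
    END;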

    After all, what good is it to have a theoretically clean, perfectly normalized set of data if no one wants to use your system because it is too slow?

  • 2021-01-31 00:56

    Interesting... a lot of answers on here..

    Is the RDBMS / OS version 64-bit?

    It appears to me that the performance degradation is several-fold, and part of the reason is certainly indexing. Have you considered partitioning some of the tables in a manner that's consistent with how the data is stored? That is, create partitions based on how the data goes in (insert order). This can give you a lot of performance benefit, as the majority of the indexes stay static.
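
    A rough sketch of date-based partitioning, assuming rows arrive in roughly date order (the boundary values, table, and column names are illustrative only, and partitioning has edition requirements depending on your SQL Server version):

    -- Partition function/scheme so that only the newest partition sees churn.
    CREATE PARTITION FUNCTION pfOrderDate (datetime)
        AS RANGE RIGHT FOR VALUES ('2020-01-01', '2020-07-01', '2021-01-01');

    CREATE PARTITION SCHEME psOrderDate
        AS PARTITION pfOrderDate ALL TO ([PRIMARY]);

    CREATE TABLE dbo.OrdersPartitioned
    (
        OrderId   int           NOT NULL,
        OrderDate datetime      NOT NULL,
        Amount    decimal(18,2) NOT NULL,
        CONSTRAINT PK_OrdersPartitioned
            PRIMARY KEY CLUSTERED (OrderDate, OrderId)
    ) ON psOrderDate (OrderDate);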

    Another issue is the XML data. Are you utilizing XML indexes? From Books Online (2008): "Using the primary XML index, the following types of secondary indexes are supported: PATH, VALUE, and PROPERTY."
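
    For example, a primary XML index plus a secondary PATH index might look like the following (the dbo.Documents table and Payload xml column are assumed for illustration; the table needs a clustered primary key first):

    CREATE PRIMARY XML INDEX PXML_Documents_Payload
        ON dbo.Documents (Payload);

    CREATE XML INDEX SXML_Documents_Payload_Path
        ON dbo.Documents (Payload)
        USING XML INDEX PXML_Documents_Payload FOR PATH;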

    Lastly, is the system currently designed to run / execute a lot of dynamic SQL? If so, you will see degradation from a memory perspective, as plans need to be generated, regenerated and seldom reused. I call this memory churn or memory thrashing.
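
    One common mitigation is to parameterize the dynamic SQL with sp_executesql, so plans can be reused instead of being compiled once per literal value (the table and column names below are assumptions, not from the question):

    DECLARE @sql nvarchar(max) =
        N'SELECT OrderId, Amount FROM dbo.Orders WHERE CustomerId = @CustomerId;';

    EXEC sys.sp_executesql
         @sql,
         N'@CustomerId int',
         @CustomerId = 42;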

    HTH

  • 2021-01-31 00:58

    After having analyzed indexes and queries, you might want to just buy more hardware. A few more gigabytes of RAM might do the trick.

  • 2021-01-31 00:59

    2 million rows is normally not a Very Large Database, depending on what kind of information you store. Usually when performance degrades you should verify your indexing strategy. The SQL Server Database Engine Tuning Advisor may be of help there.
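
    As a complement to the Tuning Advisor, the missing-index DMVs give a quick, if rough, sanity check of the indexing strategy (treat the suggestions as hints, not as indexes to create blindly):

    SELECT TOP (20)
           mid.statement AS table_name,
           mid.equality_columns,
           mid.inequality_columns,
           mid.included_columns,
           migs.user_seeks,
           migs.avg_user_impact
    FROM   sys.dm_db_missing_index_details     AS mid
    JOIN   sys.dm_db_missing_index_groups      AS mig
           ON mig.index_handle = mid.index_handle
    JOIN   sys.dm_db_missing_index_group_stats AS migs
           ON migs.group_handle = mig.index_group_handle
    ORDER BY migs.user_seeks * migs.avg_user_impact DESC;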

  • 2021-01-31 00:59

    You are right to do whatever works.
    ... as long as you realise that there may be a price to pay later. It sounds like you are thinking about this anyway.

    Things to check:

    Deadlocks

    • Are all processes accessing tables in the same order?

    Slowness

    • Are any queries doing table scans?
      • Check for large joins (more than 4 tables)
      • Check your indexes (a quick scan-vs-seek check is sketched below)
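
    As a quick check for scan-heavy tables, something like the following against the index-usage DMV can help (the counters reset on instance restart, so treat it as a hint rather than proof):

    SELECT  OBJECT_NAME(ius.object_id) AS table_name,
            i.name                     AS index_name,
            ius.user_seeks,
            ius.user_scans
    FROM    sys.dm_db_index_usage_stats AS ius
    JOIN    sys.indexes                 AS i
            ON i.object_id = ius.object_id AND i.index_id = ius.index_id
    WHERE   ius.database_id = DB_ID()
            AND ius.user_scans > ius.user_seeks
    ORDER BY ius.user_scans DESC;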

    See my other posts on general performance tips:

    • How do you optimize tables for specific queries?
    • Favourite performance tuning tricks
  • 2021-01-31 01:02

    A few million records is a tiny database to SQL Server. It can handle terabytes of data with lots of joins, no sweat. You likely have a design problem or very poorly written queries.

    Kudos for performance testing before you go live. It is a lot harder to fix this stuff after you have been in production for months or years.

    What you did is probably a bad choice. If you denormalize, you need to set up triggers to make sure the data stays in sync. Did you do that? How much did it increase your insert and update time?
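
    For what it's worth, a minimal sketch of the kind of trigger involved, assuming a CustomerName column duplicated from Customers onto Orders (purely illustrative names, not from the question):

    CREATE TRIGGER trg_Customers_SyncName
    ON dbo.Customers
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Push the changed name to every denormalized copy on Orders.
        UPDATE o
        SET    o.CustomerName = i.CustomerName
        FROM   dbo.Orders AS o
        JOIN   inserted   AS i ON i.CustomerId = o.CustomerId;
    END;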

    My first guess would be that you didn't put indexes on the foreign keys.
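
    A rough way to spot single-column foreign keys with no index whose leading column covers them (a simplification, but it catches the common case):

    SELECT  fk.name AS foreign_key_name,
            OBJECT_NAME(fkc.parent_object_id)                    AS table_name,
            COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS column_name
    FROM    sys.foreign_keys        AS fk
    JOIN    sys.foreign_key_columns AS fkc
            ON fkc.constraint_object_id = fk.object_id
    WHERE   NOT EXISTS
            (SELECT 1
             FROM   sys.index_columns AS ic
             WHERE  ic.object_id       = fkc.parent_object_id
                    AND ic.column_id   = fkc.parent_column_id
                    AND ic.key_ordinal = 1);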

    Other guesses as to what could be wrong include overuse of things such as:

    • correlated subqueries
    • scalar functions
    • views calling views
    • cursors
    • EAV tables
    • non-sargable predicates (see the sketch below)
    • SELECT *
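
    On sargability specifically, the difference is whether a predicate lets the optimizer seek an index. Assuming an index on OrderDate (table and column names are illustrative):

    -- Non-sargable: the function on the column defeats an index seek.
    SELECT OrderId FROM dbo.Orders WHERE YEAR(OrderDate) = 2020;

    -- Sargable rewrite: a range predicate can use an index seek.
    SELECT OrderId
    FROM   dbo.Orders
    WHERE  OrderDate >= '2020-01-01' AND OrderDate < '2021-01-01';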

    Poor table design can also make it hard to have good performance. For instance, if your tables are too wide, accessing them will be slower. If you are often converting data to another data type in order to use it, then you have it stored incorrectly and this will always be a drag on the system.

    Dynamic SQL may be faster than a stored proc, or it may not; there is no one right answer here for performance. For internal security (you do not have to set rights at the table level) and ease of making changes to the database, stored procs are better.

    You need to run Profiler and determine what your slowest queries are. Also look at all the queries that are run very frequently; a small change can pay off big when the query is run thousands of times a day.
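
    If you prefer not to trace, the cached-plan statistics DMV gives a similar starting point for finding the heaviest queries (available on SQL Server 2005 and later):

    SELECT TOP (20)
           qs.execution_count,
           qs.total_elapsed_time / 1000 AS total_elapsed_ms,
           qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms,
           st.text AS batch_text
    FROM   sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_elapsed_time DESC;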

    You also should go get some books on performance tuning. These will help you through the process, as performance problems can be due to many things: database design, query design, hardware, indexing, etc.

    There is no one quick fix and denormalizing randomly can get you in more trouble than not if you don't maintain the data integrity.
