How to increase the performance of a database?

我在风中等你 2021-02-14 15:15

I have designed databases several times in my company. To increase the performance of the database, I look only at normalisation and indexing.

If you were asked to increase the performance of a database, what else would you consider?

10 Answers
  • 2021-02-14 15:54

    Compression. For the vast majority of loads I've tried, using compression was a tremendous free ride. Reduced data size means reduced I/O means better throughput. In SQL Server 2005 the compression options are limited (vardecimal). But I would seriously consider upgrading to 2008 for page compression alone. Or 2008 R2 if you use nvarchar frequently to get Unicode compression.
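
    As a sketch (the table name is hypothetical), estimating and then enabling page compression on SQL Server 2008+ looks roughly like this:

    ```sql
    -- Estimate how much space page compression would save (SQL Server 2008+).
    EXEC sp_estimate_data_compression_savings
         @schema_name      = 'dbo',
         @object_name      = 'Orders',   -- hypothetical table
         @index_id         = NULL,       -- all indexes
         @partition_number = NULL,       -- all partitions
         @data_compression = 'PAGE';

    -- If the estimate looks good, rebuild the table with page compression.
    ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);
    ```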

    Data Retention. Establish retention policies and delete old data aggressively. Less data means less I/O, which means better throughput. Often this is seen as operational rather than design, but I like to think of it as an application design issue.
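
    A minimal sketch of an aggressive retention purge, assuming a hypothetical `dbo.AuditLog` table with a `CreatedAt` column; deleting in small batches keeps the transaction log and lock footprint small:

    ```sql
    -- SQL Server 2008+ syntax (inline DECLARE initializer).
    DECLARE @cutoff datetime = DATEADD(MONTH, -3, GETDATE());

    WHILE 1 = 1
    BEGIN
        -- Remove expired rows a few thousand at a time.
        DELETE TOP (5000) FROM dbo.AuditLog
        WHERE CreatedAt < @cutoff;

        IF @@ROWCOUNT = 0 BREAK;   -- nothing left to delete
    END
    ```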

    Of course, I assume you already monitor each and every query to ensure none does stupid end-to-end table scans.

    Many more performance boosters are mostly operational or deployment, not design: maintenance (defragmentation, index rebuild etc), I/O and storage design etc.

    And last but not least understand the hidden cost of various turn-key solutions. Like, say, Replication, or Database Mirroring.

  • 2021-02-14 15:55

    One performance factor hasn't been mentioned yet:

    Hardware.

    Databases are heavily I/O-bound. Moving to faster disks should increase the speed of database queries, and splitting the database across many fast drives can improve it even more.

  • 2021-02-14 15:56

    Optimizing the queries that access the database is most important. Adding indexes alone doesn't guarantee that queries will actually use them.
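
    A classic sketch of this (table and column names are hypothetical): wrapping an indexed column in a function makes the predicate non-sargable, so the index goes unused.

    ```sql
    -- Wrapping the indexed column in a function prevents an index seek:
    SELECT * FROM dbo.Orders
    WHERE YEAR(OrderDate) = 2020;       -- scans despite an index on OrderDate

    -- An equivalent range predicate lets the optimizer seek the index:
    SELECT * FROM dbo.Orders
    WHERE OrderDate >= '20200101'
      AND OrderDate <  '20210101';
    ```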

  • 2021-02-14 15:57

    If a query is extremely mission-critical, you may want to consider de-normalizing to reduce the number of table lookups per query. Beyond that, if you need more performance than indexing and de-normalizing can deliver, you might want to look at the application side: caching, optimizing queries/stored procedures, etc.

  • 2021-02-14 15:58

    My role at MySpace was "Performance Enhancement DBA/Developer". I would say that Normalization and Indexes are a requirement in high performance databases, but you must really analyze your table structures and indexes to truly unlock the powers of database design.

    Here are a few suggestions I have for you:

    1. Get to know the DB engine. A thorough knowledge of the underlying I/O structure goes a very long way in designing a proper index or table. Using PerfMon and Profiler, alongside your knowledge of what read/write I/Os are, you can put some very specific numbers behind your theory of what a well-formed table / index solution is.

    2. Understand the difference between Clustered and Non-Clustered indexes and when to use which.

    3. Use sys.dm_os_waiting_tasks and the sys.dm_os_wait_stats DMVs. They will tell you where you should put your effort into reducing wait-time.
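
    As a sketch, the wait-stats DMV can be queried like this (the excluded wait types are just common idle/background waits; the list is illustrative, not exhaustive):

    ```sql
    -- Top waits since the last restart (or since the stats were cleared).
    SELECT TOP (10)
           wait_type,
           wait_time_ms,
           waiting_tasks_count
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                            'BROKER_TASK_STOP', 'SQLTRACE_BUFFER_FLUSH')
    ORDER BY wait_time_ms DESC;
    ```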

    4. Use SET STATISTICS IO ON and SET STATISTICS TIME ON, and evaluate your execution plans to see whether a query reduces or increases the number of page reads or its duration.
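
    A minimal usage sketch (the query itself is a hypothetical example):

    ```sql
    SET STATISTICS IO ON;    -- reports logical/physical page reads per table
    SET STATISTICS TIME ON;  -- reports parse/compile and execution CPU time

    SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate >= '20200101';

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    ```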

    5. DBCC SHOWCONTIG will tell you if your tables are heavily fragmented. This is often neglected by developers and junior DBAs from a performance point of view, yet it can have a very big effect on the number of page reads you do. If a table has 20% extent page density, you're reading about 5 times the data you otherwise would if the table and its indexes were defragmented.
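
    For example (table name hypothetical; note that DBCC SHOWCONTIG is deprecated in later versions in favour of the physical-stats DMV):

    ```sql
    -- Fragmentation report for one table and all of its indexes.
    DBCC SHOWCONTIG ('dbo.Orders') WITH ALL_INDEXES;

    -- The replacement DMV, available from SQL Server 2005 onwards:
    SELECT index_id,
           avg_fragmentation_in_percent,
           avg_page_space_used_in_percent
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'SAMPLED');
    ```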

    6. Evaluate dirty reads (NOLOCK, READ UNCOMMITTED). If you can do away with millisecond-precision on reads, save the locks!
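
    Both forms below allow dirty reads; this is only a sketch for workloads where stale or uncommitted data is acceptable (e.g. approximate dashboard counts):

    ```sql
    -- Table hint on a single statement:
    SELECT COUNT(*) FROM dbo.Orders WITH (NOLOCK);

    -- Or session-wide, for everything that follows:
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT COUNT(*) FROM dbo.Orders;
    ```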

    7. Consider taking out unnecessary Foreign Keys. They're useful in Dev environments, not on high-performance transactional systems.

    8. Partitions in large tables make a big difference - only if properly designed.
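
    A bare-bones sketch of range partitioning by month (all names and boundary values are hypothetical):

    ```sql
    -- Map date ranges to partitions.
    CREATE PARTITION FUNCTION pfOrdersByMonth (datetime)
        AS RANGE RIGHT FOR VALUES ('20210101', '20210201', '20210301');

    -- Place all partitions on one filegroup for the sketch;
    -- a real design would typically spread them across filegroups.
    CREATE PARTITION SCHEME psOrdersByMonth
        AS PARTITION pfOrdersByMonth ALL TO ([PRIMARY]);

    CREATE TABLE dbo.OrdersPartitioned (
        OrderId   bigint   NOT NULL,
        OrderDate datetime NOT NULL
    ) ON psOrdersByMonth (OrderDate);
    ```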

    9. Application changes - If you can schedule batch updates for asynchronous transactions, put them into an index-free heap and process it on a schedule, so that you don't constantly update the tables which you query heavily.

    10. Always, always, always use the same data type in variables as in the target columns. For example, the following statement uses a bigint variable against a smallint column:

    declare @i bigint
    set @i = 0

    select * from MyTable where Col01SmallInt >= @i

    While evaluating index / table pages, the query engine may opt to convert your smallint column data to bigint. Consider changing your variable type instead, or at least converting it to smallint in your search condition.
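
    The corrected version of the snippet above simply matches the variable type to the column, avoiding the implicit conversion:

    ```sql
    -- Matching the variable type to the smallint column keeps the
    -- predicate sargable; no per-row conversion is needed.
    declare @i smallint
    set @i = 0

    select * from MyTable where Col01SmallInt >= @i
    ```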

    11. SQL 2005/08 gives you "Reports" in the management tooling; take a look at the reports on how your indexes are performing. Are they being scanned or sought? When was your last table scan? If it was recent, your indexes are not covering all necessary queries. If you have an index that is hardly used (seeked or scanned) but is constantly being updated, consider dropping it. It may save you a lot of unnecessary row locks and key locks.

    That's all I can think of off the top of my head. If you run into a more specific problem, I would have a more specific answer for you..

  • 2021-02-14 16:00

    There are many things you could do, a lot of them already suggested above. Some that I would look at (in this order):

    • Errors/logs - many db engines have reporting tools that point out problem areas in a database. Start here to see if there's anything you can focus on right away.
    • Data retention - check the business specification for how long data should be kept, and make sure any older data is moved off to a data warehouse to keep table sizes small. (Why keep 5 years of data if you only need the last 3 months?)
    • Look for table scans and index the data if it will help (you have to weigh this against the cost of table writes). Your server logs can probably help you find table scans.
    • Atomic elements of work, are some writes keeping locks on different tables before a commit point is reached? Can those elements of work be simplified or commit points moved to speed up performance? This is where you will need a developer to look at the code.
    • Look for long running SQL statements, can it be made more efficient? Sometimes poorly structured queries can really bog an application down. You may need to suggest a coding change to improve performance.
    • DBA realm: look at how tables are allocated: page size, multiple segments, etc. This is where diagnostic tools from the vendor come in handy, as they can often suggest how to structure a table based on usage history. An experienced DBA would be useful here.
    • look for hardware/network bottlenecks. This is where you would need a hardware guy. :)
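
    For the table-scan bullet above, on SQL Server one sketch (counters reset at instance restart, so treat the numbers as indicative) is to query the index usage DMV:

    ```sql
    -- Indexes that are mostly scanned rather than sought in this database.
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           i.name                   AS index_name,
           s.user_scans,
           s.user_seeks
    FROM sys.dm_db_index_usage_stats AS s
    JOIN sys.indexes AS i
      ON i.object_id = s.object_id AND i.index_id = s.index_id
    WHERE s.database_id = DB_ID()
    ORDER BY s.user_scans DESC;
    ```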

    These are really high level, I would also take a look at what the vendor of your db engine suggests as performance improvements.

    Also, I would gauge a list like this against what my boss is willing to pay for and how much time I have. ;)

    Hope this helps.
