tips for optimizing a read-only SQL database


I have a mid-sized SQL Server 2008 database that holds actuarial data. All of its use cases are read-only queries. Are there any special optimizations I should consider?

7 Answers
  • 2021-02-08 11:18

    For a read-only table, consider altering the indexes to use a fill factor of 100%.

    This will increase the amount of data on each data page. More data per page, fewer pages to read, less I/O, thus better performance.

    I like this option because it improves performance without code changes or table changes.
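
    In T-SQL this is one statement per table; a minimal sketch, assuming a hypothetical table named dbo.PolicyClaims:

    ```sql
    -- Rebuild every index on the table with pages packed to 100%.
    -- Safe only because the table receives no further writes; a full
    -- page splits immediately on the next INSERT or UPDATE.
    ALTER INDEX ALL ON dbo.PolicyClaims
    REBUILD WITH (FILLFACTOR = 100);
    ```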

  • 2021-02-08 11:25

    If it is read-only, one thing you can do is put indexes on just about anything that might help (space permitting). Normally adding an index is a trade-off between a performance hit to writes and a performance gain for reads. If you get rid of the writes, it's no longer a trade-off.

    When you load the database you would want to drop all/most of the indexes, perform the load, then put the indexes back on the tables.
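
    A sketch of that load cycle, using DISABLE/REBUILD rather than DROP/CREATE so the index definitions stay in the catalog; the table, index, and file names are hypothetical:

    ```sql
    -- Disable each nonclustered index (leave the clustered index alone,
    -- or the table itself becomes inaccessible).
    ALTER INDEX IX_PolicyClaims_ClaimDate ON dbo.PolicyClaims DISABLE;

    -- Perform the load.
    BULK INSERT dbo.PolicyClaims
    FROM 'C:\loads\claims.dat'
    WITH (TABLOCK);

    -- Rebuilding re-enables the disabled indexes and leaves everything
    -- freshly defragmented.
    ALTER INDEX ALL ON dbo.PolicyClaims REBUILD;
    ```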

  • 2021-02-08 11:27

    One strategy is to add a read-only filegroup to your DB and put your read-only tables there. A read-only filegroup lets SQL Server make a number of optimizations, such as eliminating locking on those tables.
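
    A minimal sketch of that setup; the database, filegroup, and file names below are placeholders:

    ```sql
    -- Create a filegroup and give it a data file.
    ALTER DATABASE ActuarialDB ADD FILEGROUP ReadOnlyFG;

    ALTER DATABASE ActuarialDB
    ADD FILE (NAME = ActuarialRO, FILENAME = 'D:\data\ActuarialRO.ndf', SIZE = 1GB)
    TO FILEGROUP ReadOnlyFG;

    -- ...move or create the read-only tables on ReadOnlyFG, then:
    ALTER DATABASE ActuarialDB MODIFY FILEGROUP ReadOnlyFG READ_ONLY;
    ```

    If the entire database is read-only, `ALTER DATABASE ActuarialDB SET READ_ONLY;` achieves the same effect in one statement.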

    In addition to standard DB optimization:

    1. Make sure all tables and indexes have zero fragmentation (a fragmentation check is sketched below)
    2. Consider adding indexes that you may have otherwise avoided due to excessive update costs
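
    For point 1, fragmentation can be measured with the sys.dm_db_index_physical_stats DMV; a sketch (the 5% threshold is just a common rule of thumb):

    ```sql
    -- List indexes in the current database with measurable fragmentation.
    SELECT OBJECT_NAME(ips.object_id)        AS table_name,
           i.name                            AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 5
    ORDER BY ips.avg_fragmentation_in_percent DESC;
    ```

    Anything that shows up can then be cleaned with `ALTER INDEX ... REBUILD`.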
  • 2021-02-08 11:30

    For performance tuning there are several things you can do. Denormalization works. Choose clustered indexes based on how the data will be queried. I don't recommend NOLOCK hints; I'd use the snapshot isolation level instead.
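
    Snapshot isolation has to be enabled at the database level before sessions can use it; a minimal sketch, with ActuarialDB as a placeholder name:

    ```sql
    -- One-time setup: enable row versioning for the database.
    ALTER DATABASE ActuarialDB SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- Optional: make ordinary READ COMMITTED reads use versioning too.
    ALTER DATABASE ActuarialDB SET READ_COMMITTED_SNAPSHOT ON;

    -- Then, in any session that wants snapshot semantics:
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    ```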

    It also matters how your database is laid out on the disks. For read-only performance, I'd recommend RAID 10, with the mdf and ldf files on isolated spindles. Normally, for a production database, it would be RAID 5 for data and RAID 1 for logs. Make sure you have a tempdb data file for each CPU; tempdb is used for sorting, and a good starting size is 5 GB of data and 1 GB of log per CPU. Also run your queries and procs through SHOWPLAN to optimize them as far as possible, and ensure that parallelism is enabled in the server settings.
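
    Extra tempdb files are added with ALTER DATABASE; a sketch using the sizes suggested above (the drive letter and file names are illustrative):

    ```sql
    -- Add one data file per CPU core; repeat with a new NAME/FILENAME each time.
    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = 'T:\tempdb2.ndf',
              SIZE = 5GB,
              FILEGROWTH = 512MB);

    -- Grow the log to the suggested starting size (templog is tempdb's
    -- default logical log file name).
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, SIZE = 1GB);
    ```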

    Also, if you have the time and space to tune for optimal performance, I'd map out exactly where the data lives on the disks, creating filegroups and putting them on completely separate volumes backed by isolated disks.

  • 2021-02-08 11:38

    I'm not sure what you consider "normal rules", but here are some suggestions.

    • If you're 100% certain the database is read-only, you can set the transaction isolation level to READ UNCOMMITTED. This is the fastest possible read setting, but it will lead to dirty reads and phantom reads if anything is writing to the tables (see the sketch after this list).

    • If you have views, use indexed views (create a unique clustered index on them). Since they will never have to be updated, the usual maintenance penalty is negated (also sketched below).

    • Take a look at this article.
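
    A sketch of the isolation-level setting from the first bullet; the query is a hypothetical example:

    ```sql
    -- Session-wide; equivalent to putting WITH (NOLOCK) on every table.
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT PolicyId, SUM(ClaimAmount) AS TotalClaims
    FROM dbo.PolicyClaims
    GROUP BY PolicyId;
    ```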
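
    And a sketch of an indexed view, with all names hypothetical; SCHEMABINDING, two-part table names, and COUNT_BIG(*) are requirements for indexing a view over a GROUP BY:

    ```sql
    CREATE VIEW dbo.vClaimTotals
    WITH SCHEMABINDING
    AS
    SELECT PolicyId,
           COUNT_BIG(*)     AS ClaimCount,   -- required in an indexed GROUP BY view
           SUM(ClaimAmount) AS TotalClaims   -- ClaimAmount assumed NOT NULL
    FROM dbo.PolicyClaims
    GROUP BY PolicyId;
    GO

    -- The unique clustered index is what actually materializes the view.
    CREATE UNIQUE CLUSTERED INDEX IX_vClaimTotals
    ON dbo.vClaimTotals (PolicyId);
    ```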

  • 2021-02-08 11:40

    In the database:

    1. Denormalize it.
    2. Use more indexes where needed.
    3. Aggregate some data if you need it in your reports (see the sketch at the end of this answer).

    In the application:

    1. Use the READ UNCOMMITTED isolation level.
    2. Use autocommit to avoid long-running transactions.
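
    A sketch of point 3 from the first list: materialize a report aggregate once at load time instead of recomputing it per query (all names hypothetical):

    ```sql
    -- Build the summary table once, after each data load.
    SELECT PolicyId,
           YEAR(ClaimDate)  AS ClaimYear,
           SUM(ClaimAmount) AS TotalClaims
    INTO dbo.ClaimTotalsByYear
    FROM dbo.PolicyClaims
    GROUP BY PolicyId, YEAR(ClaimDate);

    CREATE CLUSTERED INDEX IX_ClaimTotalsByYear
    ON dbo.ClaimTotalsByYear (PolicyId, ClaimYear);
    ```

    Reports then read the small summary table instead of scanning the detail rows.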