What is considered a “large” table in SQL Server?

独厮守ぢ 2021-02-01 04:47

I have a table with 10 million records in it. Is that considered a lot of records? Should I be worried about search times? If not, it will keep growing, so what is considered large?

6 Answers
  •  有刺的猬 2021-02-01 05:30

    Ditto the other posters: how "large" a table is depends on what your data is, what kinds of queries you want to run, what your hardware is, and what your definition of a reasonable search time is.

    But here's one way to define "large": a "large" table is one that exceeds the amount of real memory the host can allocate to SQL Server. SQL Server is perfectly capable of working with tables that greatly exceed physical memory in size, but any time a query requires a table scan (i.e., reading every record) of such a table, you will get clobbered. Ideally you want to keep the entire table in memory; if that is not possible, you at least want to keep the necessary indexes in memory. If you have an index that supports your query and you can keep that index in RAM, performance will still scale pretty well.
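    As a rough sanity check, you can ask the buffer pool how much of each table it currently holds. The query below is a common sketch using the standard DMVs (it needs VIEW SERVER STATE permission, and the hobt_id join is a simplification that covers ordinary in-row data):

        -- Approximate buffer-pool usage per object in the current database.
        -- Each buffered page is 8 KB, so page count * 8 / 1024 gives MB.
        SELECT OBJECT_NAME(p.object_id) AS object_name,
               COUNT(*) * 8 / 1024      AS buffered_mb
        FROM sys.dm_os_buffer_descriptors AS bd
        JOIN sys.allocation_units         AS au
          ON au.allocation_unit_id = bd.allocation_unit_id
        JOIN sys.partitions               AS p
          ON p.hobt_id = au.container_id  -- simplification: in-row data only
        WHERE bd.database_id = DB_ID()
        GROUP BY p.object_id
        ORDER BY buffered_mb DESC;

    If the table you query hardest is nowhere near the top of that list, its pages are coming off disk on demand, which is exactly the situation described above.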

    If it is not obvious to you as a designer what your clustered index (the physical arrangement of the data) and non-clustered indexes (essentially pointers into the clustered index) should be, SQL Server ships with good profiling tools, such as the Database Engine Tuning Advisor and the missing-index DMVs, that will help you define indexes appropriate for your workload.
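    If you want a quick first pass before reaching for the Tuning Advisor, the missing-index DMVs summarize the indexes the optimizer wished it had. Treat the output as hints to evaluate, not indexes to create blindly:

        -- Index suggestions accumulated since the last restart, ordered by
        -- a rough estimate of potential benefit.
        SELECT TOP (10)
               mid.statement AS table_name,
               mid.equality_columns,
               mid.inequality_columns,
               mid.included_columns,
               migs.user_seeks,
               migs.avg_user_impact
        FROM sys.dm_db_missing_index_details     AS mid
        JOIN sys.dm_db_missing_index_groups      AS mig
          ON mig.index_handle = mid.index_handle
        JOIN sys.dm_db_missing_index_group_stats AS migs
          ON migs.group_handle = mig.index_group_handle
        ORDER BY migs.user_seeks * migs.avg_user_impact DESC;

    A suggestion usually translates into a covering non-clustered index along these lines (the table and column names here are made up for illustration):

        -- Seeks on (CustomerId, OrderDate); INCLUDE covers the query so it
        -- never has to touch the clustered index at all.
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
            ON dbo.Orders (CustomerId, OrderDate)
            INCLUDE (TotalAmount);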

    Finally, consider throwing hardware at the problem. SQL Server performance is nearly always memory-bound rather than CPU-bound, so don't buy a fast 8-core machine and cripple it with 4 GB of physical memory. If you need reliably low latency from a 100 GB database, consider hosting it on a machine with 64 GB, or even 128 GB, of RAM.
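    Whatever the box has, also make sure SQL Server is allowed to use it: "max server memory" is sometimes left at a cap carried over from an older, smaller machine. A minimal check-and-set sketch, assuming a 64 GB host where you leave roughly 6 GB for the OS:

        -- 'max server memory (MB)' is an advanced option, so expose it first.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;

        -- Show the current cap on buffer-pool memory.
        EXEC sp_configure 'max server memory (MB)';

        -- Example only: cap SQL Server at ~58 GB on a 64 GB box. The exact
        -- headroom to leave for the OS is a judgment call.
        EXEC sp_configure 'max server memory (MB)', 59392;
        RECONFIGURE;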
