MySQL optimization suggestions for a large table

天命终不由人 2021-01-29 09:18

I want to optimize this query (the full statement, as quoted in the second answer below):

select  location_id, dept_id,
        round(sum(sales), 0), sum(qty),
        count(distinct tran_id),
        now()
    from  tran_sales
    where  tran_date <= '2016-12-24'
    group by  location_id, dept_id;

2 Answers
  • 2021-01-29 09:44

    None of the suggestions so far will help much, because...

    • Covering index: That is only slightly smaller than the table, so it is slightly faster.
    • KEY(tran_date) -- a waste; it is better to use the PK, which starts with tran_date.
    • PARTITIONing -- No. That is likely to be slower.
    • Removing tran_date (or otherwise rearranging the PK) -- This will hurt. The filtering (WHERE) is on tran_date; it is usually best to have that first.
    • So, why was COUNT(*) fast? Well, start by looking at the EXPLAIN. It will show that it used KEY(tran_date) instead of scanning the table. Less data to scan, hence faster.
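The EXPLAIN check described above can be run directly against the query; the table and index names below follow the question, and the comment describes what to look for (a sketch, not output from the poster's server):

```sql
EXPLAIN
SELECT COUNT(*)
FROM tran_sales
WHERE tran_date <= '2016-12-24';

-- Inspect the `key` and `rows` columns of the EXPLAIN output:
-- if `key` names an index whose leading column is tran_date,
-- MySQL is range-scanning that (smaller) index rather than the
-- full clustered table, which is why COUNT(*) comes back quickly.
```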

The real issue is that you have millions of rows to scan, and it takes time to touch millions of rows.

How to speed it up? Create and maintain a summary table, then query that table (with thousands of rows) instead of the original table (millions of rows). The total count is SUM(counts); the total sum is SUM(sums); the average is SUM(sums)/SUM(counts); and so on.
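A minimal sketch of such a summary table, assuming one row per (day, location, dept); the table and column names here are hypothetical. Note one caveat: summing per-day distinct counts is only exact if a given tran_id never spans more than one day.

```sql
-- Hypothetical summary table: one row per (tran_date, location_id, dept_id).
CREATE TABLE tran_sales_summary (
    tran_date   DATE          NOT NULL,
    location_id INT           NOT NULL,
    dept_id     INT           NOT NULL,
    sales_sum   DECIMAL(15,2) NOT NULL,
    qty_sum     INT           NOT NULL,
    tran_count  INT           NOT NULL,  -- distinct tran_id within that day
    PRIMARY KEY (tran_date, location_id, dept_id)
);

-- Refresh one day's rows after that day's data is loaded:
REPLACE INTO tran_sales_summary
SELECT tran_date, location_id, dept_id,
       SUM(sales), SUM(qty), COUNT(DISTINCT tran_id)
FROM tran_sales
WHERE tran_date = '2016-12-24'
GROUP BY tran_date, location_id, dept_id;

-- The original query, rewritten against the (much smaller) summary table.
-- SUM(tran_count) assumes a tran_id never spans two days; otherwise it
-- overcounts relative to COUNT(DISTINCT tran_id) on the base table.
SELECT location_id, dept_id,
       ROUND(SUM(sales_sum), 0), SUM(qty_sum), SUM(tran_count),
       NOW()
FROM tran_sales_summary
WHERE tran_date <= '2016-12-24'
GROUP BY location_id, dept_id;
```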

  • 2021-01-29 09:55

    For this query:

    select location_id, dept_id,
           round(sum(sales), 0), sum(qty), count(distinct tran_id),
           now()
    from tran_sales
    where tran_date <= '2016-12-24'
    group by location_id, dept_id;
    

There is not much you can do. One attempt would be a covering index: (tran_date, location_id, dept_id, sales, qty, tran_id) -- tran_id must be included so the index also covers COUNT(DISTINCT tran_id) -- but I don't think that will help much.
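For reference, such a covering index could be created as follows; the index name is hypothetical. Note that tran_id has to be in the index for it to cover the query, since COUNT(DISTINCT tran_id) reads that column too:

```sql
-- Covering index: every column the query touches is in the index,
-- so MySQL can answer from the index alone ("Using index" in EXPLAIN)
-- without visiting the clustered table rows.
ALTER TABLE tran_sales
    ADD INDEX ix_cover (tran_date, location_id, dept_id, sales, qty, tran_id);
```

As the answer notes, the win is modest: the index is only somewhat smaller than the table, and the millions of index entries still have to be scanned.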
