The query is executing very slowly; is there any way to improve it further?

Backend · Unresolved · 8 answers · 1129 views
长情又很酷 2021-02-15 17:26

I have the following query, and because of the many SUM function calls, it is running too slowly. I have a lot of records in my database and I would like to get

8 Answers
  •  悲&欢浪女
    2021-02-15 17:54

    For optimizing such calculations you may consider pre-calculating some of the values. The idea of pre-calculation is to reduce the number of rows that need to be read or processed.

    One way of achieving this is to use an indexed view and let the engine do the calculations by itself. As this type of view has some limitations, you may end up creating a simple table and performing the calculations yourself instead. Basically, it depends on the business needs.
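    As a rough sketch of the simple-table alternative (the table name here is hypothetical, and the refresh would be driven by a scheduled job or trigger in practice), the daily counts could be kept in an ordinary summary table:

    ```sql
    -- Hypothetical summary table holding one pre-aggregated row per day
    CREATE TABLE [dbo].[DailyCounts]
    (
        [Day]   DATE   PRIMARY KEY
       ,[Count] BIGINT NOT NULL
    );
    GO

    -- Full rebuild from the base table; in practice this would run on a schedule
    TRUNCATE TABLE [dbo].[DailyCounts];

    INSERT INTO [dbo].[DailyCounts] ([Day], [Count])
    SELECT CAST([RowDateTime] AS DATE)
          ,COUNT_BIG(*)
    FROM [dbo].[DataSource]
    GROUP BY CAST([RowDateTime] AS DATE);
    ```

    Unlike an indexed view, such a table has no restrictions on which aggregations you can store, but it can go stale between refreshes, whereas an indexed view is maintained by the engine on every write.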

    So, in the example below I am creating a table with RowID and RowDateTime columns and inserting 1 million rows. I am using an indexed view to count the entries per day, so instead of querying 1 million rows per year I will query 365 rows per year to compute these metrics.

    DROP TABLE IF EXISTS [dbo].[DataSource];
    GO
    
    CREATE TABLE [dbo].[DataSource]
    (
        [RowID] BIGINT IDENTITY(1,1) PRIMARY KEY
       ,[RowDateTime] DATETIME2
    );
    
    GO
    
    DROP VIEW IF EXISTS [dbo].[vw_DataSource];
    GO
    
    CREATE VIEW [dbo].[vw_DataSource] WITH SCHEMABINDING
    AS
    SELECT YEAR([RowDateTime]) AS [Year]
          ,MONTH([RowDateTime]) AS [Month]
          ,DAY([RowDateTime]) AS [Day]
          ,COUNT_BIG(*) AS [Count]
    FROM [dbo].[DataSource]
    GROUP BY YEAR([RowDateTime])
            ,MONTH([RowDateTime])
            ,DAY([RowDateTime]);
    GO
    
    CREATE UNIQUE CLUSTERED INDEX [IX_vw_DataSource] ON [dbo].[vw_DataSource]
    (
        [Year] ASC,
        [Month] ASC,
        [Day] ASC
    );
    
    GO
    
    DECLARE @Min bigint, @Max bigint;
    SELECT @Min = 1, @Max = 1000000;
    
    INSERT INTO [dbo].[DataSource] ([RowDateTime])
    SELECT TOP (@Max - @Min + 1)
           DATEFROMPARTS(2019, 1 + FLOOR(12 * RAND(CONVERT(varbinary, NEWID())))
                             , 1 + FLOOR(28 * RAND(CONVERT(varbinary, NEWID()))))
    FROM master..spt_values t1
    CROSS JOIN master..spt_values t2;
    
    GO
    
    
    SELECT *
    FROM [dbo].[vw_DataSource]
    
    
    SELECT SUM(CASE WHEN DATEFROMPARTS([Year], [Month], [Day]) >= DATEADD(MONTH, -1, GETDATE()) THEN [Count] ELSE 0 END) AS [Current - Last 30 Days Col1]
          ,SUM(CASE WHEN DATEFROMPARTS([Year], [Month], [Day]) >= DATEADD(QUARTER, -1, GETDATE()) THEN [Count] ELSE 0 END) AS [Current - Last 90 Days Col1]
          ,SUM(CASE WHEN DATEFROMPARTS([Year], [Month], [Day]) >= DATEADD(YEAR, -1, GETDATE()) THEN [Count] ELSE 0 END) AS [Current - Last 365 Days Col1]
    FROM [dbo].[vw_DataSource] WITH (NOEXPAND); -- hint so the view's index is used on non-Enterprise editions
    

    The success of such a solution depends very much on how the data is distributed and how many rows you have. For example, if you have one entry per day for each day of the year, the view and the table will have the same number of rows, so the I/O operations will not be reduced.

    Also, the above is just an example of materializing the data and reading it. In your case you may need to add more columns to the view definition.
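    For example (assuming a hypothetical [Amount] column on the base table), the view could pre-aggregate a sum alongside the count. Note that an indexed view with GROUP BY must include COUNT_BIG(*), and SUM over a nullable expression is not allowed, so the column would need to be NOT NULL or wrapped in ISNULL:

    ```sql
    CREATE VIEW [dbo].[vw_DataSourceTotals] WITH SCHEMABINDING
    AS
    SELECT YEAR([RowDateTime])  AS [Year]
          ,MONTH([RowDateTime]) AS [Month]
          ,DAY([RowDateTime])   AS [Day]
          ,SUM([Amount])        AS [TotalAmount] -- [Amount] is a hypothetical NOT NULL column
          ,COUNT_BIG(*)         AS [Count]       -- required in an indexed view with GROUP BY
    FROM [dbo].[DataSource]
    GROUP BY YEAR([RowDateTime])
            ,MONTH([RowDateTime])
            ,DAY([RowDateTime]);
    ```

    The outer query can then SUM the pre-aggregated [TotalAmount] per day instead of scanning the raw rows, just as with [Count] above.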
