Count(*) vs Count(1) - SQL Server

醉梦人生 2020-11-21 05:21

Just wondering if any of you use Count(1) over Count(*), and whether there is a noticeable difference in performance, or whether this is just a legacy habit.

13 answers
  • 2020-11-21 05:54

    I work on the SQL Server team, and I can hopefully clarify a few points in this thread (I had not seen it before, so I apologize that the engineering team has not done so sooner).

    First, there is no semantic difference between select count(1) from table vs. select count(*) from table. They return the same results in all cases (and it is a bug if not). As noted in the other answers, select count(column) from table is semantically different and does not always return the same results as count(*).
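    A minimal sketch of that semantic difference, using a throwaway temp table:

    -- COUNT(*) and COUNT(1) count rows; COUNT(column) skips NULLs
    CREATE TABLE #t (col INT NULL);
    INSERT #t VALUES (1), (NULL), (2);

    SELECT COUNT(*)   AS count_star,    -- 3: every row
           COUNT(1)   AS count_one,     -- 3: always the same as COUNT(*)
           COUNT(col) AS count_column   -- 2: the NULL row is skipped
    FROM #t;

    DROP TABLE #t;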

    Second, with respect to performance, there are two aspects that matter in SQL Server (and SQL Azure): compilation-time work and execution-time work. The compilation-time work is a trivially small amount of extra work in the current implementation: in some cases the * is expanded to all columns and then reduced back to a single output column, due to how some of the internal operations work in binding and optimization. I doubt it would show up in any measurable test; it would likely get lost in the noise of all the other things that happen under the covers (such as auto-stats, XEvent sessions, Query Store overhead, triggers, etc.). It is maybe a few thousand extra CPU instructions. So, count(1) does a tiny bit less work during compilation (which will usually happen once, after which the plan is cached and reused across subsequent executions). For execution time, assuming the plans are the same, there should be no measurable difference. (One of the earlier examples shows a difference, but it is most likely due to other factors on the machine if the plan is the same.)
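    If you want to check the "same plan" claim yourself, one way (a sketch; it assumes both forms have been executed recently and are still in the plan cache, and t is a hypothetical table name) is to pull both cached plans side by side and compare them:

    -- Fetch the cached plans for both COUNT variants and compare the plan XML
    SELECT st.text, qp.query_plan
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    WHERE st.text LIKE '%COUNT(%) FROM t%';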

    As to how the plans could differ at all: it is extremely unlikely, but it is possible given the architecture of the current optimizer. SQL Server's optimizer works as a search program (think of a chess program searching through alternatives for different parts of the query and costing them out to find the cheapest plan in reasonable time). This search has a few limits on how it operates so that query compilation finishes in reasonable time. For queries beyond the most trivial, there are phases of the search, and they deal with tranches of queries based on how costly the optimizer thinks the query would be to execute. There are three main search phases, and each later phase can run more aggressive (expensive) heuristics trying to find a cheaper plan than any prior solution. At the end of each phase there is a decision process that determines whether to return the plan found so far or keep searching, based on the total time taken so far vs. the estimated cost of the best plan found so far. So, on machines with different CPU speeds it is possible (albeit rare) to get different plans because one compilation timed out in an earlier phase with a plan while another continued into the next search phase. There are also a few similar scenarios related to timing out of the last phase and potentially running out of memory on very, very expensive queries that consume all the memory on the machine (not usually a problem on 64-bit, but it was a larger concern back on 32-bit servers). Ultimately, if you get a different plan, the performance at runtime will differ. I don't think it is remotely likely that the difference in compilation time would EVER lead to any of these conditions happening.
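    If you are curious whether one of your own compilations hit such a limit, the showplan XML records the optimization level and any early-abort reason (StatementOptmLevel and StatementOptmEarlyAbortReason are real showplan attributes; the query below is a hypothetical sketch against the plan cache):

    -- Inspect optimization level and early-abort reason for cached COUNT plans
    WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
    SELECT st.text,
           qp.query_plan.value('(//StmtSimple/@StatementOptmLevel)[1]', 'varchar(20)') AS optm_level,
           qp.query_plan.value('(//StmtSimple/@StatementOptmEarlyAbortReason)[1]', 'varchar(30)') AS early_abort
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    WHERE st.text LIKE '%COUNT(%';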

    Net-net: Please use whichever of the two you want as none of this matters in any practical form. (There are far, far larger factors that impact performance in SQL beyond this topic, honestly).

    I hope this helps. I did write a book chapter about how the optimizer works, but I don't know whether it's appropriate to post it here (as I believe I still get tiny royalties from it). So instead I'll post a link to a talk I gave at SQLBits in the UK about how the optimizer works at a high level, where you can see the main phases of the search in a bit more detail: https://sqlbits.com/Sessions/Event6/inside_the_sql_server_query_optimizer

  • 2020-11-21 05:54

    I would expect the optimiser to ensure there is no real difference outside weird edge cases.

    As with anything, the only real way to tell is to measure your specific cases.

    That said, I've always used COUNT(*).

  • 2020-11-21 05:55

    I ran a quick test on SQL Server 2012 on an 8 GB RAM Hyper-V box; you can see the results for yourself. I was not running any other windowed application apart from SQL Server Management Studio while running these tests.

    My table schema:

    CREATE TABLE [dbo].[employee](
        [Id] [bigint] IDENTITY(1,1) NOT NULL,
        [Name] [nvarchar](50) NOT NULL,
     CONSTRAINT [PK_employee] PRIMARY KEY CLUSTERED 
    (
        [Id] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    
    GO
    

    Total number of records in Employee table: 178090131 (~ 178 million rows)
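    (For reference, a table of roughly that size can be generated with something like the following sketch; this is not necessarily how the data was actually loaded.)

    -- Load ~178 million rows in 1M-row batches (SSMS "GO n" syntax repeats the batch)
    INSERT INTO dbo.employee WITH (TABLOCK) ([Name])
    SELECT TOP (1000000)
           N'Employee ' + CONVERT(nvarchar(10), ABS(CHECKSUM(NEWID())) % 100000)
    FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b;
    GO 178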

    First Query:

    Set Statistics Time On
    Go    
    Select Count(*) From Employee
    Go    
    Set Statistics Time Off
    Go
    

    Result of First Query:

     SQL Server parse and compile time: 
     CPU time = 0 ms, elapsed time = 35 ms.
    
     (1 row(s) affected)
    
     SQL Server Execution Times:
       CPU time = 10766 ms,  elapsed time = 70265 ms.
     SQL Server parse and compile time: 
       CPU time = 0 ms, elapsed time = 0 ms.
    

    Second Query:

    Set Statistics Time On
    Go
    Select Count(1) From Employee
    Go
    Set Statistics Time Off
    Go
    

    Result of Second Query:

     SQL Server parse and compile time: 
       CPU time = 14 ms, elapsed time = 14 ms.
    
    (1 row(s) affected)
    
     SQL Server Execution Times:
       CPU time = 11031 ms,  elapsed time = 70182 ms.
     SQL Server parse and compile time: 
       CPU time = 0 ms, elapsed time = 0 ms.
    

    You can see there is a difference of 83 ms (= 70265 - 70182), which can easily be attributed to the exact system conditions at the time the queries ran. I also did only a single run, so the comparison would become more accurate with several runs and some averaging. If, for such a huge data set, the difference comes out under 100 milliseconds, then we can safely conclude that the two queries do not exhibit any performance difference in the SQL Server engine.

    Note: RAM usage got close to 100% in both runs, and I restarted the SQL Server service before each run.
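    To do the several-runs averaging suggested above, the aggregate query stats DMV is handy (a sketch; times are in microseconds, and rows persist only while the plans stay cached):

    -- Average CPU and elapsed time across all executions of each cached COUNT query
    SELECT st.text,
           qs.execution_count,
           qs.total_worker_time  / qs.execution_count AS avg_cpu_us,
           qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE st.text LIKE '%Count(%From Employee%';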

  • 2020-11-21 05:57

    As this question comes up again and again, here is one more answer. I hope to add something for beginners wondering about "best practice" here.

    SELECT COUNT(*) FROM something counts records, which is an easy task.

    SELECT COUNT(1) FROM something retrieves a 1 per record and then counts the 1s that are not null, which is essentially counting records, only in a more complicated way.

    Having said this: a good DBMS will notice that the second statement yields the same count as the first and reinterpret it accordingly, so as not to do unnecessary work. So usually both statements result in the same execution plan and take the same amount of time.

    From the point of view of readability, however, you should use the first statement. You want to count records, so count records, not expressions. Use COUNT(expression) only when you want to count non-null occurrences of something, as in the sketch below.
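    For instance, with a hypothetical orders table whose shipped_date column is NULLable:

    -- Count all orders vs. only those that have actually shipped
    -- (hypothetical table and column names)
    SELECT COUNT(*)            AS all_orders,
           COUNT(shipped_date) AS shipped_orders  -- rows with NULL shipped_date are skipped
    FROM orders;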

  • 2020-11-21 05:58

    In all RDBMS, the two ways of counting are equivalent in the result they produce. Regarding performance, I have not observed any difference in SQL Server, but it may be worth pointing out that some RDBMS (e.g. PostgreSQL 11) have a less optimal implementation of COUNT(1), as they check the argument expression for nullability, as can be seen in this post.

    I've found a 10% performance difference for 1M rows when running:

    -- Faster
    SELECT COUNT(*) FROM t;
    
    -- 10% slower
    SELECT COUNT(1) FROM t;
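
    To see that difference for yourself on PostgreSQL, comparing the plans is enough (a sketch; t is the same hypothetical table):

    -- PostgreSQL: compare plan shape and timing of the two forms
    EXPLAIN ANALYZE SELECT COUNT(*) FROM t;
    EXPLAIN ANALYZE SELECT COUNT(1) FROM t;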
    
  • 2020-11-21 05:59

    COUNT(1) is not substantially different from COUNT(*), if at all. As to the question of COUNTing NULLable columns, the differences between COUNT(*) and COUNT(<some col>) are straightforward to demo:

    USE tempdb;
    GO
    
    IF OBJECT_ID( N'dbo.Blitzen', N'U') IS NOT NULL DROP TABLE dbo.Blitzen;
    GO
    
    CREATE TABLE dbo.Blitzen (ID INT NULL, Somelala CHAR(1) NULL);
    
    INSERT dbo.Blitzen SELECT 1, 'A';
    INSERT dbo.Blitzen SELECT NULL, NULL;
    INSERT dbo.Blitzen SELECT NULL, 'A';
    INSERT dbo.Blitzen SELECT 1, NULL;
    
    SELECT COUNT(*), COUNT(1), COUNT(ID), COUNT(Somelala) FROM dbo.Blitzen;
    GO
    
    DROP TABLE dbo.Blitzen;
    GO
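
    On the sample data above this returns 4, 4, 2, 2: COUNT(*) and COUNT(1) count all four rows, while COUNT(ID) and COUNT(Somelala) each skip the two rows where that column is NULL.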
    
    0 讨论(0)