How do you find the row count for all your tables in Postgres

无人及你 2020-11-22 12:31

I'm looking for a way to find the row count for all my tables in Postgres. I know I can do this one table at a time with:

SELECT count(*) FROM table_name;
         


        
15 Answers
  • 2020-11-22 13:22

    You can use this query to generate all table names with their counts:

    select ' select  ''' || tablename || ''', count(*) from ' || tablename || ' union'
    from pg_tables where schemaname = 'public';
    

    The result of the above query will be:

    select  'dim_date', count(*) from dim_date union 
    select  'dim_store', count(*) from dim_store union
    select  'dim_product', count(*) from dim_product union
    select  'dim_employee', count(*) from dim_employee union
    

    You'll need to remove the last union and add a semicolon at the end:

    select  'dim_date', count(*) from dim_date union 
    select  'dim_store', count(*) from dim_store union
    select  'dim_product', count(*) from dim_product union
    select  'dim_employee', count(*) from dim_employee;
    

    Then run it!
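    A variant of the same idea, as a sketch rather than a drop-in recipe: you can let Postgres assemble the whole statement in one step with the built-in string_agg and format functions, still assuming the public schema:

    -- Build the complete UNION ALL statement in one step (sketch; 'public' schema assumed)
    SELECT string_agg(
             format('select %L as table_name, count(*) from %I', tablename, tablename),
             ' union all '
           ) || ';'
    FROM pg_tables
    WHERE schemaname = 'public';

    Copy the single row it returns and run that; union all skips the duplicate-elimination step that a plain union would add.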

  • 2020-11-22 13:24

    To get estimates, see Greg Smith's answer.

    To get exact counts, the other answers so far are plagued with some issues, some of them serious (see below). Here's a version that's hopefully better:

    CREATE FUNCTION rowcount_all(schema_name text default 'public')
      RETURNS table(table_name text, cnt bigint) as
    $$
    declare
     table_name text;
    begin
      -- relkind = 'r' restricts the loop to ordinary tables
      for table_name in SELECT c.relname FROM pg_class c
        JOIN pg_namespace s ON (c.relnamespace=s.oid)
        WHERE c.relkind = 'r' AND s.nspname=schema_name
      LOOP
        -- %L quotes the table name as a literal, %I quotes the identifiers
        RETURN QUERY EXECUTE format('select cast(%L as text),count(*) from %I.%I',
           table_name, schema_name, table_name);
      END LOOP;
    end
    $$ language plpgsql;
    

    It takes a schema name as a parameter, defaulting to public if none is given.
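    For the simple case it can be called directly; a minimal usage sketch (my_schema is just a placeholder name):

    -- Default: count rows in every table of the 'public' schema
    SELECT * FROM rowcount_all();

    -- Explicit schema (placeholder name)
    SELECT * FROM rowcount_all('my_schema');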

    To work with a specific list of schemas or a list coming from a query without modifying the function, it can be called from within a query like this:

    WITH rc(schema_name,tbl) AS (
      select s.n,rowcount_all(s.n) from (values ('schema1'),('schema2')) as s(n)
    )
    SELECT schema_name,(tbl).* FROM rc;
    

    This produces a three-column output with the schema, the table, and the row count.

    Now here are some issues in the other answers that this function avoids:

    • Table and schema names shouldn't be injected into executable SQL without being quoted, either with quote_ident or with the more modern format() function and its %I format specifier. Otherwise some malicious person may name their table tablename;DROP TABLE other_table, which is a perfectly valid table name.

    • Even without the SQL injection and funny-characters problems, table names may exist in variants that differ only by case. If one table is named ABCD and another abcd, the SELECT count(*) FROM... must use a quoted name, otherwise it will skip ABCD and count abcd twice. The %I of format does this automatically, as the sketch after this list shows.

    • information_schema.tables lists custom composite types in addition to tables, even when table_type is 'BASE TABLE' (!). As a consequence, we can't iterate over information_schema.tables, otherwise we risk running select count(*) from name_of_composite_type, which would fail. By contrast, pg_class with relkind='r' should always work fine.

    • The type of COUNT() is bigint, not int. Tables with more than 2.15 billion rows may exist (running a count(*) on them is a bad idea, though).

    • A permanent type does not need to be created for a function to return a result set with several columns: RETURNS TABLE(definition...) is a better alternative.
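    To illustrate the quoting point, here is a small sketch of what %I produces for a mixed-case name (the table name is made up):

    -- Naive concatenation would yield:  select count(*) from public.ABCD
    -- which Postgres folds to lowercase and resolves to abcd, not ABCD.
    SELECT format('select count(*) from %I.%I', 'public', 'ABCD');
    -- Result: select count(*) from public."ABCD"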

  • 2020-11-22 13:25

    There are three ways to get this sort of count, each with its own tradeoffs.

    If you want a true count, you have to execute the SELECT statement like the one you used against each table. This is because PostgreSQL keeps row visibility information in the row itself, not anywhere else, so any accurate count can only be relative to some transaction. You're getting a count of what that transaction sees at the point in time when it executes. You could automate this to run against every table in the database, but you probably don't need that level of accuracy or want to wait that long.
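    As a small illustration of that point (a sketch; some_table is a placeholder), two counts taken inside the same REPEATABLE READ transaction see the same snapshot, even if another session inserts or deletes rows in between:

    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT count(*) FROM some_table;   -- the snapshot is taken by the first query
    -- rows committed by other sessions after this point are not visible here
    SELECT count(*) FROM some_table;   -- returns the same number as above
    COMMIT;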

    The second approach notes that the statistics collector tracks roughly how many rows are "live" (not deleted or obsoleted by later updates) at any time. This value can be off by a bit under heavy activity, but is generally a good estimate:

    SELECT schemaname,relname,n_live_tup 
      FROM pg_stat_user_tables 
      ORDER BY n_live_tup DESC;
    

    That can also show you how many rows are dead, which is itself an interesting number to monitor.
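    If the dead-row figure is what you want to watch, the same view exposes it as n_dead_tup; a sketch:

    SELECT schemaname, relname, n_live_tup, n_dead_tup
      FROM pg_stat_user_tables
     ORDER BY n_dead_tup DESC;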

    The third way is to note that the system ANALYZE command, which is executed by the autovacuum process regularly as of PostgreSQL 8.3 to update table statistics, also computes a row estimate. You can grab that one like this:

    SELECT 
      nspname AS schemaname,relname,reltuples
    FROM pg_class C
    LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
    WHERE 
      nspname NOT IN ('pg_catalog', 'information_schema') AND
      relkind='r' 
    ORDER BY reltuples DESC;
    

    Which of these queries is better to use is hard to say. Normally I make that decision based on whether there's more useful information I also want to use inside of pg_class or inside of pg_stat_user_tables. For basic counting purposes just to see how big things are in general, either should be accurate enough.
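    If you would rather not choose, the two sources join easily on the table's OID, so you can look at both figures side by side (a sketch; pick whichever extra columns you care about):

    SELECT s.schemaname, s.relname,
           s.n_live_tup,                      -- statistics collector's live-row count
           c.reltuples::bigint AS reltuples   -- planner estimate refreshed by ANALYZE
      FROM pg_stat_user_tables s
      JOIN pg_class c ON c.oid = s.relid
     ORDER BY s.n_live_tup DESC;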
