Logging slow queries on Google Cloud SQL PostgreSQL instances

死守一世寂寞 2021-02-13 10:40

The company I work for uses Google Cloud SQL to manage their SQL databases in production.

We're having performance issues and I thought it'd be a good idea (among other things) to start logging slow queries.

4 Answers
  • 2021-02-13 11:04

    Not ideal by any measure, but what we do is run something like this on a cron once a minute and log out the result:

    SELECT EXTRACT(EPOCH FROM now() - query_start) AS seconds, query
    FROM pg_stat_activity
    WHERE state = 'active'
      AND now() - query_start > interval '1 second'
      AND query NOT LIKE '%pg_stat_activity%'
    ORDER BY seconds DESC
    LIMIT 20;
    

    You'd need to fiddle with the query to get millisecond granularity, and even then it'll only catch queries that overlap with your cron frequency, but probably better than nothing?
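    Concretely, the cron job might look like the sketch below; the connection settings (PGHOST, PGUSER, a .pgpass file) and the log path are assumptions, not part of the original answer. Wrapping the query in a script also avoids crontab's special treatment of the % character, which would otherwise mangle the LIKE pattern:

    ```shell
    #!/bin/sh
    # log_slow_queries.sh -- a sketch; connection settings (PGHOST, PGUSER,
    # PGPASSWORD or a .pgpass file) are assumed to be configured already.
    # Install with a crontab entry such as:
    #   * * * * * /usr/local/bin/log_slow_queries.sh >> /var/log/slow_queries.log 2>&1
    psql -X -At -F ' ' <<'SQL'
    SELECT EXTRACT(EPOCH FROM now() - query_start) AS seconds, query
    FROM pg_stat_activity
    WHERE state = 'active'
      AND now() - query_start > interval '1 second'
      AND query NOT LIKE '%pg_stat_activity%'
    ORDER BY seconds DESC
    LIMIT 20;
    SQL
    ```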

  • 2021-02-13 11:19

    Monitoring slow PostgreSQL queries is currently not available for Cloud SQL instances. As you mention, the log_min_duration_statement flag is currently not supported by Cloud SQL.

    Work is being done on adding this feature to Cloud SQL, and you can keep track of the progress through this link. Click the star icon in the top-left corner to receive email notifications whenever significant progress is made.

  • 2021-02-13 11:19

    There is a way to log slow queries through the pg_stat_statements extension, which is supported by Cloud SQL.

    Since Cloud SQL doesn't grant superuser rights to any user, you need a workaround. First, enable the extension with

    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
    

    then you can check slow queries with a query like

    SELECT pd.datname,
           us.usename,
           pss.userid,
           pss.query                         AS SQLQuery,
           pss.rows                          AS TotalRowCount,
           (pss.total_time / 1000)           AS TotalSecond,
           ((pss.total_time / 1000) / calls) as TotalAverageSecond
    FROM pg_stat_statements AS pss
           INNER JOIN pg_database AS pd
                      ON pss.dbid = pd.oid
           INNER JOIN pg_user AS us
                      ON pss.userid = us.usesysid
    ORDER BY TotalAverageSecond DESC
    LIMIT 10;
    

    As the postgres user you can have a look at all slow queries, but since that user is not a superuser you will see <insufficient privilege> in place of other users' queries. To get around this limitation you can install the extension on the other databases too (normally only the postgres user has rights to install extensions) and check the query texts as the owner of each database.
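    Installing the extension on every database can be scripted; this is a rough sketch, assuming the connection settings are already configured and the connecting user is allowed to create extensions in each database:

    ```shell
    #!/bin/sh
    # For each non-template database on the instance, create the
    # pg_stat_statements extension (a no-op where it already exists).
    # Connection settings (PGHOST, PGUSER, ...) are assumed to be configured.
    for db in $(psql -X -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
      psql -X -d "$db" -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"
    done
    ```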

  • 2021-02-13 11:24

    April 3, 2019 UPDATE

    It is now possible to log slow queries on Google Cloud SQL PostgreSQL instances, see https://cloud.google.com/sql/docs/release-notes#april_3_2019:

    database_flags = [
      {
        name = "log_min_duration_statement"
        value = "1000"
      },
    ]
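    The snippet above is in Terraform syntax (it looks like the database_flags argument of a google_sql_database_instance resource, an assumption from its shape). Outside Terraform, the same flag can be set with the gcloud CLI; the instance name below is a placeholder:

    ```shell
    # Set log_min_duration_statement (in milliseconds) on an existing instance.
    # "my-instance" is a placeholder; note that changing database flags may
    # restart the instance.
    gcloud sql instances patch my-instance \
      --database-flags=log_min_duration_statement=1000
    ```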
    

    Once you enable log_min_duration_statement, you can view the logs using Stackdriver Logging. Select Cloud SQL Database -> cloudsql.googleapis.com/postgres.log and you will see entries like this:

    [103402]: [9-1] db=cloudsqladmin,user=cloudsqladmin LOG: duration: 11.211 ms statement: [YOUR SQL HERE]
    
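    If you prefer the command line over the Console, the same entries can be read with gcloud; the project and instance names below are placeholders:

    ```shell
    # Read recent postgres.log entries for a Cloud SQL instance.
    # "my-project" and "my-instance" are placeholders.
    gcloud logging read \
      'resource.type="cloudsql_database"
       resource.labels.database_id="my-project:my-instance"
       logName="projects/my-project/logs/cloudsql.googleapis.com%2Fpostgres.log"' \
      --limit=20 --project=my-project
    ```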

    References:

    • Full list of supported flags (CTRL+F for log_min_duration_statement): https://cloud.google.com/sql/docs/postgres/flags#postgres-l
    • Issue tracker: https://issuetracker.google.com/issues/74578509#comment54
    • PostgreSQL docs: https://www.postgresql.org/docs/9.6/runtime-config-logging.html#GUC-LOG-MIN-DURATION-STATEMENT