shared_buffers size in PostgreSQL


Question


I have a Postgres 9.3 database on RHEL 6.4. I am getting DB connection timeouts from a server which is also on RHEL 6.4.

The following is sar data from when the issue occurred.

00:00:01        CPU      %usr     %nice      %sys   %iowait    %steal      %irq     %soft    %guest     %idle
02:10:01        all      0.05      0.00      0.29      3.06      0.00      0.00      0.05      0.00     96.55
02:20:01        all      0.07      0.00      0.28      3.84      0.00      0.00      0.05      0.00     95.75

00:00:01    kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit
02:10:01       781108  65150968     98.82    151576  60250076   5905400      7.17
02:20:01       245712  65686364     99.63    151664  60778552   5905140      7.17

The value of "%memused" seems high, but it does not include the shared buffers ("kbcached" includes the shared buffer cache memory).

Currently, the data to be exported to the server passes through the database's shared buffers all at once. This data is huge, and as a result the DB timeout occurs.

Shared buffer: memory used when exporting data

Please suggest:

  1. Do I need to increase the size of shared_buffers?
  2. Is it possible to split the data in the shared buffers that is to be sent to the server? (See the batching sketch after this list.)
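
For item 2, a minimal batching sketch (my own illustration, not from the original post): a server-side cursor lets the client pull the export in chunks instead of materializing the whole result at once. The cursor name and batch size below are arbitrary assumptions.

BEGIN;
-- declare a cursor over the export query (names here are illustrative)
DECLARE export_cur CURSOR FOR
    SELECT *
    FROM charge_history
    WHERE picked_status = 'NOTPICKED';
-- fetch one batch at a time; repeat until FETCH returns no rows
FETCH FORWARD 10000 FROM export_cur;
CLOSE export_cur;
COMMIT;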

I analyzed the query from the DB function:

kddi=# EXPLAIN (BUFFERS, ANALYZE)
select *
from charge_history, subscriber_data
where subscriber_data.customer_id = charge_history.customer_id
   and charge_history.updated_date::date = (CURRENT_DATE - integer '1')
   and charge_history.picked_status = 'NOTPICKED';

                                                                         QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop  (cost=0.85..10873.44 rows=75 width=271) (actual time=0.123..51.515 rows=3982 loops=1)
   Buffers: shared hit=18475 read=55682
   ->  Index Scan using idx_chrghist_picked_status on charge_history  (cost=0.42..10239.13 rows=75 width=255) (actual time=0.092..16.022 rows=3982 loops=1)
         Index Cond: (picked_status = 'NOTPICKED'::text)
         Filter: ((updated_date)::date = (('now'::cstring)::date - 1))
         Rows Removed by Filter: 10022
         Buffers: shared hit=2547 read=55682
   ->  Index Scan using "CUSTOMERID" on subscriber_data  (cost=0.43..8.45 rows=1 width=36) (actual time=0.008..0.008 rows=1 loops=3982)
         Index Cond: ((customer_id)::text = (charge_history.customer_id)::text)
         Buffers: shared hit=15928
Total runtime: 52.053 ms

The shared_buffers setting in the DB is 1GB.
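
If you do raise shared_buffers (question 1), note that PostgreSQL 9.3 has no ALTER SYSTEM yet, so the value must be changed in postgresql.conf and the server restarted. A minimal sketch; the 2GB target is only an illustrative assumption:

SHOW shared_buffers;   -- reports the current value, 1GB here
-- then, in postgresql.conf (requires a server restart):
-- shared_buffers = 2GB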

Can I do something to improve my query?


Answer 1:


I suspect that the following index would speed things up by a factor of 3 or more:

CREATE INDEX ON charge_history(picked_status, (updated_date::date));

But you can only create that index if updated_date is a date or a timestamp without time zone, because casting from timestamp with time zone to date is not immutable (it depends on the setting of TimeZone).
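
To illustrate the restriction (my example, assuming updated_date is timestamp with time zone), the direct expression index would be rejected:

-- fails when updated_date is timestamp with time zone,
-- because the cast depends on the TimeZone setting
CREATE INDEX ON charge_history (picked_status, (updated_date::date));
-- ERROR:  functions in index expression must be marked IMMUTABLE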

If that is a problem, you could change the query to something like:

... AND CAST(charge_history.updated_date AT TIME ZONE 'UTC' AS date) = ...

Then that expression can be indexed, because it is immutable.
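
The matching index would then look something like this (my sketch; the query must use exactly the same expression, including the time zone, for the planner to consider the index):

CREATE INDEX ON charge_history
    (picked_status, ((updated_date AT TIME ZONE 'UTC')::date));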

The other problem is that the optimizer underestimates how many rows in charge_history will be matched. The cause could well be that recent rows tend to have picked_status = 'NOTPICKED'. Maybe the solution is to calculate statistics for that table more often.
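
As an immediate check (a plain sketch using the standard statistics views), you can analyze the table manually and see when statistics were last gathered:

ANALYZE charge_history;
SELECT last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'charge_history';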

You might want to experiment with either reducing autovacuum_analyze_scale_factor for that table, or setting it to 0 and setting a reasonably high autovacuum_analyze_threshold.
This can be done with an SQL statement like this:

ALTER TABLE charge_history SET (
   autovacuum_analyze_scale_factor = 0,
   autovacuum_analyze_threshold = 100000
);

This example statement would cause new statistics to be calculated whenever 100000 rows have been modified.
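
To confirm that the per-table settings took effect, the stored options can be read back from the catalog (a sketch using the standard pg_class catalog):

SELECT reloptions
FROM pg_class
WHERE relname = 'charge_history';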



Source: https://stackoverflow.com/questions/42064412/share-buffer-size-in-postgresql
