Temporary tables bloating pg_attribute

Submitted by 十年热恋 on 2019-12-05 16:24:45

The best solution would be to create your temporary tables at session start with

CREATE TEMPORARY TABLE ... (
   ...
) ON COMMIT DELETE ROWS;

Then the temporary tables would be kept for the duration of the session but emptied at every commit.
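A minimal sketch of that pattern (the table and column names are just placeholders):

```sql
-- Create once at session start; the table definition survives for the whole
-- session, so pg_attribute is written only once instead of on every use.
CREATE TEMPORARY TABLE session_scratch (
    id   bigint,
    data text
) ON COMMIT DELETE ROWS;

BEGIN;
INSERT INTO session_scratch (id, data) VALUES (1, 'work in progress');
SELECT * FROM session_scratch;  -- row is visible inside the transaction
COMMIT;
SELECT * FROM session_scratch;  -- empty again: rows were deleted at COMMIT
```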

This will reduce the bloat of pg_attribute considerably, and bloat should no longer be a problem.

You could also join the dark side (be warned, this is unsupported):

  • Start PostgreSQL with

    pg_ctl start -o -O
    

    so that you can modify system catalogs.

  • Connect as superuser and run

    UPDATE pg_catalog.pg_class
    SET reloptions = ARRAY['autovacuum_vacuum_cost_delay=0']
    WHERE oid = 'pg_catalog.pg_attribute'::regclass;
    

Now autovacuum will run much more aggressively on pg_attribute, and that will probably take care of your problem.

Mind that the setting will be gone after a major upgrade.

I know this is an old question, but somebody might find my help useful here in the future.

We rely heavily on temp tables, with more than 500 requests per second and async I/O via Node.js, and so we experienced very heavy bloating of pg_attribute. All you are left with then is very aggressive vacuuming, which halts performance. The usual answers do not solve this, because dropping and recreating temp tables bloats pg_attribute heavily, and so one sunny morning you will find database performance dead, with pg_attribute at 200+ GB while your database itself is around 10 GB.
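If you want to check whether you are heading that way, you can look at the catalog's size and dead-tuple statistics (a sketch using the standard statistics views):

```sql
-- Total on-disk size of pg_attribute including its indexes and TOAST data.
SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_attribute'))
       AS pg_attribute_size;

-- Dead-tuple counts show whether autovacuum is keeping up with the churn.
SELECT n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_sys_tables
WHERE relname = 'pg_attribute';
```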

The elegant solution is this:

create temp table if not exists my_temp_table ( /* column definitions */ ) on commit delete rows;

This way you can keep using temp tables, spare your pg_attribute, avoid dark-side heavy vacuuming, and get the desired performance.
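In practice that means every request can issue the same statement unconditionally; only the first call in each session actually creates the table and touches pg_attribute (table and column names here are hypothetical):

```sql
BEGIN;

-- Safe to repeat on every request: IF NOT EXISTS makes subsequent calls
-- a no-op (PostgreSQL just emits a NOTICE), so the catalog is not re-bloated.
CREATE TEMP TABLE IF NOT EXISTS request_scratch (
    user_id bigint,
    payload jsonb
) ON COMMIT DELETE ROWS;

INSERT INTO request_scratch (user_id, payload)
VALUES (42, '{"step": "processing"}');

-- ... work with the data; it vanishes automatically at COMMIT.
COMMIT;
```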

Don't forget to reclaim the space that has already accumulated (note that VACUUM FULL takes an exclusive lock on the table while it runs):

vacuum full pg_depend;
vacuum full pg_attribute;

Cheers :)
