pg_dump on Database throwing error 'out of shared memory'

Submitted on 2019-12-05 06:09:16
pgollangi

I solved this problem by backing up each schema individually: as the size of the database grows (whether in number of schemas or number of tables), it becomes hard to take a single backup with pg_dump.

I modified the script as follows to take schema-wise backups:

  1. Before running pg_dump, list all database schemas into a file, so that we can iterate over them and back up each schema separately.

    Here is the command to list all schemas to a file:

    psql <db_name> -o <output_file> < <sql_to_list_schema>

    Here sql_to_list_schema contains:

    SELECT n.nspname FROM pg_catalog.pg_namespace n WHERE n.nspname !~ '^pg_' AND n.nspname <> 'information_schema';

  2. Now read each line of output_file and take a backup of that schema:

    pg_dump <db_name> -f <backup_file> -x -O -n <schema_name_read_from_file>

    (The old -i/--ignore-version and -R/--no-reconnect options were removed in PostgreSQL 9.5, so they are omitted here.)
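The two steps above can be combined into a small loop. A minimal sketch, in which the database name, file names, and the sample schema list are all hypothetical placeholders; the PG_DUMP variable defaults to a dry-run echo so the sketch runs without a live database (point it at the real pg_dump binary in practice):

```shell
#!/bin/sh
# Schema-wise backup loop, sketched from the steps above.
DB=mydb                         # hypothetical database name
SCHEMA_LIST=schemas.txt

# In a real run, schemas.txt comes from the psql command in step 1;
# here we write a couple of sample entries so the sketch is self-contained.
printf 'public\nsales\n' > "$SCHEMA_LIST"

# Default to a dry-run echo; set PG_DUMP=pg_dump against a real server.
PG_DUMP="${PG_DUMP:-echo pg_dump}"

while IFS= read -r schema; do
    [ -z "$schema" ] && continue        # skip blank lines in psql output
    $PG_DUMP "$DB" -x -O -n "$schema" -f "backup_${schema}.sql"
done < "$SCHEMA_LIST"
```

With the dry-run default, this just prints the pg_dump command it would run for each schema, one line per schema.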

You might need to increase max_locks_per_transaction even further. Check the documentation for details on what this parameter means. If in doubt, check how many tables you have and set max_locks_per_transaction to at least that number; then you should be OK.
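As a quick sanity check, you can count the tables in the database and compare against the current setting. A sketch (the value 256 below is only an illustrative example, not a recommendation):

```sql
-- Count ordinary tables; pg_dump also locks other relations,
-- so the real requirement can be somewhat higher than this number.
SELECT count(*) FROM pg_catalog.pg_class WHERE relkind = 'r';
```

Then raise the setting in postgresql.conf (changing it requires a server restart):

```
max_locks_per_transaction = 256    # default is 64
```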
