I get an error "could not write block … of temporary file: No space left on device" using PostgreSQL

闹比i 2021-02-19 04:35

I'm running a really big query that inserts a lot of rows into a table, almost 8 million rows divided into several smaller queries, but at some point this error appears: "could not write block … of temporary file: No space left on device".

3 Answers
  • 2021-02-19 05:15

    The error is quite self-explanatory: you are running a big query but do not have enough disk space for it. If PostgreSQL is installed in /opt, check whether that filesystem has enough free space to run the query. If you are unsure, first add a LIMIT to confirm you are getting the expected output, then free up space and run the full query, or write the output to a file.
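A quick way to do both checks, sketched for a typical Linux setup (the data-directory path, database name, and table name below are assumptions; adjust them to your installation):

```shell
# Show free space on the filesystem holding the PostgreSQL data directory
# (/var/lib/postgresql is a common default; yours may be under /opt)
df -h /var/lib/postgresql 2>/dev/null || df -h

# Dry-run the query with a LIMIT first to confirm the output shape
# (hypothetical database and table names):
# psql -d mydb -c "SELECT * FROM big_source LIMIT 100;"
```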

  • 2021-02-19 05:16

    Inserting data or creating an index needs temporary space, whose placement is governed by temp_tablespaces: it determines where temporary tables and indexes go, as well as temporary files used for purposes such as sorting large data sets. Your error means that the location your temp_tablespaces points to has run out of disk space.

    To resolve this problem, there are two approaches:

    1. Reclaim space at the location your temp_tablespaces points to (by default $PGDATA/base/pgsql_tmp).

    2. If that location still does not have enough space for temporary storage, create another temporary tablespace for the database:

    create tablespace tmp_YOURS location '/your/location/with/enough/space';
    alter database yourDB set temp_tablespaces = tmp_YOURS;
    GRANT ALL ON TABLESPACE tmp_YOURS TO USER_OF_DB;
    

    Then disconnect the session and reconnect so the setting takes effect.
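To confirm the new setting is in effect, you can check it in the fresh session (a quick sketch, using the tablespace name from the example above):

```
-- Should now report tmp_YOURS
SHOW temp_tablespaces;
```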

  • 2021-02-19 05:27

    OK. As there are still some facts missing, here is an attempt at an answer that may clarify the issue:

    It appears that you are simply running out of disk space. Check with df -h on Linux/Unix, for example.

    To show how this can happen: for a table with, say, three integers, the data alone occupies about 12 bytes per row. On top of that you need some overhead for row management etc.; in another answer Erwin mentioned about 23 bytes and linked to the manual for more information. There may also be some padding between rows. So doing a little math:

    Even with just three integers we end up at about 40 bytes per row. Given that you wanted to insert 8,000,000 rows, this sums to 320,000,000 bytes, or roughly 300 MB (for our three-integer example only, and very roughly).
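The arithmetic above can be checked with a quick back-of-the-envelope calculation (the numbers are the rough figures from this answer, not exact PostgreSQL internals):

```python
# Rough per-row size: 12 bytes of integer data + ~23 bytes of row
# header, rounded up to 40 for alignment/padding.
bytes_per_row = 40
rows = 8_000_000

total_bytes = rows * bytes_per_row
print(total_bytes)                 # 320000000
print(total_bytes / 1024 / 1024)   # roughly 305 MiB, i.e. ~300 MB
```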

    Now, given that you have a couple of indexes on this table, those indexes will also grow during the inserts. Another aspect could be bloat on the table and indexes, which can be cleared with a VACUUM.

    So what's the solution:

    1. Provide more disk space to your database
    2. Split your inserts into smaller batches and make sure VACUUM runs between them
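Point 2 could look like this in SQL (table and column names are hypothetical; adjust the id ranges to your data):

```
-- Insert in smaller batches, reclaiming space between them
INSERT INTO target_table SELECT * FROM source_table WHERE id BETWEEN 1 AND 1000000;
VACUUM target_table;

INSERT INTO target_table SELECT * FROM source_table WHERE id BETWEEN 1000001 AND 2000000;
VACUUM target_table;

-- ...repeat for the remaining ranges up to 8,000,000
```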