postgresql-9.6

Postgresql | No space left on device

时光毁灭记忆、已成空白 · Submitted on 2019-12-10 17:37:50

Question: I am getting a disk-space error while running a batch process against a PostgreSQL database, even though df -h shows the machine has plenty of space. This is the exact error:

    org.springframework.dao.DataAccessResourceFailureException: PreparedStatementCallback;
    SQL [INSERT into BATCH_JOB_INSTANCE(JOB_INSTANCE_ID, JOB_NAME, JOB_KEY, VERSION) values (?, ?, ?, ?)];
    ERROR: could not extend file "base/16388/16452": No space left on device
    Hint: Check free disk space.

What is causing this issue? EDIT postgres …
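One common explanation is that df -h is being run against a different filesystem than the one PostgreSQL actually writes to (the data directory, WAL, or temp files may sit on their own partition). A few hedged diagnostic queries, assuming superuser access; object names come from the system catalogs, not from the question:

```sql
-- Where does this cluster actually store its data?
-- (run df -h against the mount containing this path)
SHOW data_directory;

-- Size of each database, largest first
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;

-- Largest tables/indexes in the current database
SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind IN ('r', 'i', 'm')
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```

Also worth checking: df -i (an exhausted inode table produces the same "No space left on device" error with free blocks remaining) and the partition holding pg_xlog on 9.6.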

Return multiple columns and rows from a function PostgreSQL instead of record

余生颓废 · Submitted on 2019-12-10 15:17:57

Question: I was reading online about PostgreSQL functions and how they return results, in these links:

    SQL function return-type: TABLE vs SETOF records
    How do I reference named parameters in Postgres sql functions?
    http://www.postgresqltutorial.com/plpgsql-function-returns-a-table/

I have written this function:

    create or replace function brand_hierarchy(account_value int)
    RETURNS table (topID INTEGER, accountId INTEGER, liveRowCount bigint, archiveRowCount bigint)
    AS $BODY$
    SELECT * FROM my_client_numbers where …
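A point that often trips people up here: the column names declared in RETURNS TABLE act as OUT parameters, so the body's SELECT must produce exactly those columns in that order, and the caller gets named columns (not a single record) by selecting FROM the function. A minimal sketch along those lines; the table and column names are assumptions based on the excerpt, not verified:

```sql
-- Sketch, assuming my_client_numbers has columns matching the
-- declared result row (names here are hypothetical).
CREATE OR REPLACE FUNCTION brand_hierarchy(account_value int)
RETURNS TABLE (topID integer,
               accountId integer,
               liveRowCount bigint,
               archiveRowCount bigint)
AS $BODY$
    SELECT c.top_id, c.account_id, c.live_row_count, c.archive_row_count
    FROM my_client_numbers AS c
    WHERE c.account_id = account_value;
$BODY$ LANGUAGE sql STABLE;

-- Calling it this way expands the result into separate named
-- columns instead of one composite "record" value:
SELECT * FROM brand_hierarchy(42);
```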

WAL archive: FAILED (please make sure WAL shipping is setup)

為{幸葍}努か · Submitted on 2019-12-08 04:03:28

Question: I am trying to configure Barman for backups. When I run "barman check replica" I keep getting:

    Server replica:
        WAL archive: FAILED (please make sure WAL shipping is setup)
        PostgreSQL: OK
        superuser: OK
        wal_level: OK
        directories: OK
        retention policy settings: OK
        backup maximum age: FAILED (interval provided: 1 day, latest backup age: No available backups)
        compression settings: OK
        failed backups: OK (there are 0 failed backups)
        minimum redundancy requirements: FAILED (have 0 backups, expected at …
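The "WAL archive: FAILED" line usually means Barman has never received a single WAL segment from the server, which points at archive_mode/archive_command on the PostgreSQL side rather than at Barman itself. A hedged sketch of the classic rsync-over-SSH setup, run on the PostgreSQL server; the host name and incoming-WALs path are placeholders that must match your Barman configuration:

```sql
-- Enable archiving and ship segments to Barman's incoming directory.
-- (barman@backup-host and the path below are assumptions.)
ALTER SYSTEM SET archive_mode = on;        -- takes effect after a restart
ALTER SYSTEM SET archive_command =
    'rsync -a %p barman@backup-host:/var/lib/barman/replica/incoming/%f';
SELECT pg_reload_conf();

-- Force a segment switch so at least one WAL file is actually shipped,
-- then re-run "barman check replica".
SELECT pg_switch_xlog();   -- PostgreSQL 9.6 name; pg_switch_wal() from v10
```

The other two FAILED lines (backup maximum age, minimum redundancy) should clear themselves once WAL shipping works and a first "barman backup replica" has completed.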

jsonb LIKE query on nested objects in an array

孤街浪徒 · Submitted on 2019-12-07 20:16:03

Question: My JSON data looks like this:

    [{
        "id": 1,
        "payload": {
            "location": "NY",
            "details": [
                { "name": "cafe",  "cuisine": "mexican" },
                { "name": "foody", "cuisine": "italian" }
            ]
        }
    }, {
        "id": 2,
        "payload": {
            "location": "NY",
            "details": [
                { "name": "mbar", "cuisine": "mexican" },
                { "name": "fdy",  "cuisine": "italian" }
            ]
        }
    }]

Given a text "foo", I want to return all the tuples that contain this substring, but I cannot figure out how to write the query. I followed this related answer but …
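Since LIKE works on text, not jsonb, the usual approach is to unnest the arrays with jsonb_array_elements and filter on the extracted text value. A sketch assuming the documents live in a hypothetical table restaurants(data jsonb), where data holds the array shown above:

```sql
-- Assumed schema: restaurants(data jsonb). Unnest the outer array,
-- then each payload's details array, and keep entries whose name
-- contains the substring.
SELECT elem->>'id'   AS id,
       d->>'name'    AS name,
       d->>'cuisine' AS cuisine
FROM restaurants,
     jsonb_array_elements(data) AS elem,
     jsonb_array_elements(elem->'payload'->'details') AS d
WHERE d->>'name' LIKE '%foo%';
```

The ->> operator returns text (unlike ->, which returns jsonb), which is what makes the LIKE comparison legal here.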

Error on ALTER TYPE in postgres relation does not exist

独自空忆成欢 · Submitted on 2019-12-07 18:14:29

Question: Using the following:

    CREATE TYPE user_types AS ENUM ('it', 'accounting', 'processes');

    CREATE TABLE my_users (
        my_user_id   integer NOT NULL,
        my_user_name text    NOT NULL,
        my_user_type user_types
    );

I want to change one of the user types:

    ALTER TYPE user_types RENAME ATTRIBUTE it TO softwaredev CASCADE;

I get an error:

    ERROR: relation "user_types" does not exist
    SQL state: 42P01

I tried adding quotes and backticks, but that didn't help. The example I wrote down here is not the exact code; my type …
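The error is explained by the clause used: RENAME ATTRIBUTE operates on composite types, so PostgreSQL tries to look up user_types as a relation (composite types are stored like relations) and fails, since an enum has labels rather than attributes. The statement for enum labels is RENAME VALUE; note it was only added in PostgreSQL 10:

```sql
-- PostgreSQL 10 and later: rename an enum label in place.
-- Labels are quoted as string literals, and there is no CASCADE
-- clause; columns using the type keep working because they store
-- the label's internal OID, not its name.
ALTER TYPE user_types RENAME VALUE 'it' TO 'softwaredev';
```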

Temporary tables bloating pg_attribute

此生再无相见时 · Submitted on 2019-12-07 08:30:46

Question: I'm using COPY to insert large batches of data into our database from CSVs. The insert looks something like this:

    -- This tmp table will contain all the items that we want to try to insert
    CREATE TEMP TABLE tmp_items (
        field1 INTEGER NULL,
        field2 INTEGER NULL,
        ...
    ) ON COMMIT DROP;

    COPY tmp_items (field1, field2, ...) FROM 'path\to\data.csv' WITH (FORMAT csv);

    -- Start inserting some items
    WITH newitems AS (
        INSERT INTO items (field1, field2)
        SELECT tmpi.field1, tmpi.field2
        FROM tmp_items …
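Every CREATE TEMP TABLE ... ON COMMIT DROP creates and then drops a full relation, leaving dead rows in pg_class and (one per column) in pg_attribute, which is exactly what bloats the catalog under heavy batch load. One common mitigation, sketched here under the assumption that batches run in the same session, is to create the staging table once and let each transaction empty it instead:

```sql
-- Create the staging table once per session; ON COMMIT DELETE ROWS
-- truncates it at every commit without dropping the relation, so
-- pg_attribute is not churned on each batch.
CREATE TEMP TABLE IF NOT EXISTS tmp_items (
    field1 integer,
    field2 integer
) ON COMMIT DELETE ROWS;

-- Each batch then reuses the same relation; COPY and the subsequent
-- INSERT must share one transaction, since commit empties the table:
-- BEGIN;
-- COPY tmp_items (field1, field2) FROM 'path\to\data.csv' WITH (FORMAT csv);
-- ... INSERT INTO items ... SELECT ... FROM tmp_items ...;
-- COMMIT;
```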

Parallel queries on CTE for writing operations in PostgreSQL

久未见 · Submitted on 2019-12-07 06:18:12

Question: From the PostgreSQL 9.6 Release Notes: "Only strictly read-only queries where the driving table is accessed via a sequential scan can be parallelized." My question is: if a CTE (WITH clause) contains only read operations, but its result is used to feed a writing operation, like an INSERT or UPDATE, is it also disallowed to parallelize its sequential scans? I mean, since a CTE is much like a temporary table that only exists for the currently executing query, can I suppose that its inner query can take …
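In 9.6 the answer is no on two counts: any statement that writes (INSERT ... SELECT, UPDATE, DELETE) is not "strictly read-only", so parallelism is disabled for the whole plan, and 9.6 additionally never parallelizes plans beneath a CTE scan. A quick way to see this for yourself is to compare plans; the table names below are assumptions for illustration:

```sql
-- Read-only query: the planner is allowed to emit a Gather node
-- with parallel workers (if the table is large enough).
EXPLAIN
SELECT count(*) FROM items WHERE field1 > 100;

-- Same scan feeding DML: no Gather node appears anywhere in the
-- plan, because a writing statement disables parallelism entirely
-- in 9.6 (this was only relaxed in later major versions).
EXPLAIN
INSERT INTO items_archive
SELECT * FROM items WHERE field1 > 100;
```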

jsonb LIKE query on nested objects in an array

感情迁移 · Submitted on 2019-12-06 09:36:04

My JSON data looks like this:

    [{
        "id": 1,
        "payload": {
            "location": "NY",
            "details": [
                { "name": "cafe",  "cuisine": "mexican" },
                { "name": "foody", "cuisine": "italian" }
            ]
        }
    }, {
        "id": 2,
        "payload": {
            "location": "NY",
            "details": [
                { "name": "mbar", "cuisine": "mexican" },
                { "name": "fdy",  "cuisine": "italian" }
            ]
        }
    }]

Given a text "foo", I want to return all the tuples that contain this substring, but I cannot figure out how to write the query. I followed this related answer but cannot figure out how to do LIKE. This is what I have working right now:

    SELECT r.res->>'name' AS feature …
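The working fragment suggests the details objects were already unnested into r.res, so only the text filter is missing. A hedged completion, in which the table name and the aliases around r.res are guesses reconstructed from the truncated excerpt:

```sql
-- Hedged completion: r.res is assumed to be one element of
-- payload->'details', produced by jsonb_array_elements; my_table
-- and its data column are hypothetical names.
SELECT r.res->>'name' AS feature
FROM my_table AS t,
     jsonb_array_elements(t.data) AS e(elem),
     jsonb_array_elements(e.elem->'payload'->'details') AS r(res)
WHERE r.res->>'name' LIKE '%foo%';
```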

Error on ALTER TYPE in postgres relation does not exist

此生再无相见时 · Submitted on 2019-12-05 23:48:57

Using the following:

    CREATE TYPE user_types AS ENUM ('it', 'accounting', 'processes');

    CREATE TABLE my_users (
        my_user_id   integer NOT NULL,
        my_user_name text    NOT NULL,
        my_user_type user_types
    );

I want to change one of the user types:

    ALTER TYPE user_types RENAME ATTRIBUTE it TO softwaredev CASCADE;

I get an error:

    ERROR: relation "user_types" does not exist
    SQL state: 42P01

I tried adding quotes and backticks, but that didn't help. The example I wrote down here is not the exact code; my type has 31 characters, but I don't think the length of my type is the issue. I'm using postgres version 9.6.2 …
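Since this asker pins the server at 9.6.2, the clean fix, ALTER TYPE ... RENAME VALUE, is out of reach: that clause only exists from PostgreSQL 10 onward. On 9.6 the widely used workaround is a direct system-catalog update, which requires superuser privileges and should be treated with care (take a backup first):

```sql
-- PostgreSQL 9.6 workaround (superuser only, use with care):
-- rename the enum label by editing pg_enum directly. Dependent
-- columns are unaffected because they store the label's OID.
UPDATE pg_enum
SET enumlabel = 'softwaredev'
WHERE enumlabel = 'it'
  AND enumtypid = 'user_types'::regtype;
```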

Temporary tables bloating pg_attribute

十年热恋 · Submitted on 2019-12-05 16:24:45

I'm using COPY to insert large batches of data into our database from CSVs. The insert looks something like this:

    -- This tmp table will contain all the items that we want to try to insert
    CREATE TEMP TABLE tmp_items (
        field1 INTEGER NULL,
        field2 INTEGER NULL,
        ...
    ) ON COMMIT DROP;

    COPY tmp_items (field1, field2, ...) FROM 'path\to\data.csv' WITH (FORMAT csv);

    -- Start inserting some items
    WITH newitems AS (
        INSERT INTO items (field1, field2)
        SELECT tmpi.field1, tmpi.field2
        FROM tmp_items tmpi
        WHERE some condition
        -- Return the new id and other fields to the next step
        RETURNING id AS newid, …
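Beyond restructuring the batch itself, it helps to measure how bloated pg_attribute actually is and to confirm autovacuum is keeping up with the catalog. A sketch using the standard statistics views:

```sql
-- How many dead tuples has the temp-table churn left in pg_attribute,
-- and when did autovacuum last clean it?
SELECT n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_sys_tables
WHERE relname = 'pg_attribute';

-- Manual cleanup: plain VACUUM reclaims dead tuples for reuse.
-- (VACUUM FULL would shrink the file but takes an exclusive lock
-- on a catalog every backend needs, so it means downtime.)
VACUUM (VERBOSE) pg_catalog.pg_attribute;
```

If n_dead_tup stays persistently high, lowering autovacuum_vacuum_scale_factor for the cluster (or scheduling periodic plain VACUUMs of the catalogs) is a common complement to reusing the staging table.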