postgresql-performance

Why is PostgreSQL not using my indexes on a small table?

眉间皱痕 submitted on 2019-11-26 18:36:22
Question: I have the following table in PostgreSQL:

    CREATE TABLE index_test (
        id int PRIMARY KEY NOT NULL,
        text varchar(2048) NOT NULL,
        last_modified timestamp NOT NULL,
        value int,
        item_type varchar(2046)
    );
    CREATE INDEX idx_index_type ON index_test ( item_type );
    CREATE INDEX idx_index_value ON index_test ( value );

I run the following selects:

    explain select * from index_test r where r.item_type='B';
    explain select r.value from index_test r where r.value=56;

The explanation of the execution plan looks
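A quick way to confirm that the planner simply finds a sequential scan cheaper on a tiny table (a sketch; the test data below is hypothetical) is to compare plans with sequential scans temporarily discouraged:

    -- Populate the table with a handful of rows and refresh statistics.
    INSERT INTO index_test (id, text, last_modified, value, item_type)
    SELECT g, 'row ' || g, now(), g % 100, chr(65 + g % 3)
    FROM generate_series(1, 50) AS g;
    ANALYZE index_test;

    -- On a table this small the planner usually picks Seq Scan: reading the
    -- whole table is cheaper than an index lookup plus heap fetches.
    EXPLAIN SELECT * FROM index_test r WHERE r.item_type = 'B';

    -- Discourage sequential scans for this session to prove the index is usable.
    SET enable_seqscan = off;
    EXPLAIN SELECT * FROM index_test r WHERE r.item_type = 'B';
    RESET enable_seqscan;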

Configuration parameter work_mem in PostgreSQL on Linux

[亡魂溺海] submitted on 2019-11-26 18:18:32
Question: I have to optimize queries by tuning basic PostgreSQL server configuration parameters. In the documentation I came across the work_mem parameter. I then checked how changing this parameter would influence the performance of my query (which uses a sort). I measured query execution time with various work_mem settings and was very disappointed. The table on which I perform my query contains 10,000,000 rows and there are 430 MB of data to sort ( Sort Method: external merge Disk: 430112kB ). With work_mem =
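A common way to run that experiment (a sketch; the table and column names are hypothetical) is to raise work_mem for the session only and watch whether EXPLAIN (ANALYZE) switches from an external merge on disk to an in-memory quicksort:

    SHOW work_mem;                       -- current value

    -- Session-level change; postgresql.conf is untouched.
    SET work_mem = '512MB';

    -- With enough work_mem the plan should report "Sort Method: quicksort"
    -- instead of "Sort Method: external merge  Disk: ...".
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM big_table ORDER BY some_column;

    RESET work_mem;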

Running PostgreSQL in memory only

淺唱寂寞╮ submitted on 2019-11-26 17:19:07
I want to run a small PostgreSQL database which runs in memory only, for each unit test I write. For instance:

    @Before
    void setUp() {
        String port = runPostgresOnRandomPort();
        connectTo("postgres://localhost:" + port + "/in_memory_db");
        // ...
    }

Ideally I'll have a single postgres executable checked into version control, which the unit tests will use. Something like HSQL, but for Postgres. How can I do that? Where can I get such a Postgres version? How can I instruct it not to use the disk?

This is not possible with Postgres. It does not offer an in-process/in-memory engine like HSQLDB or MySQL.
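There is no true in-memory mode, but partial workarounds are commonly used for test databases (a sketch; the tablespace path is a hypothetical tmpfs mount, and UNLOGGED tables only skip WAL rather than keeping data in memory):

    -- Put test data on a RAM-disk tablespace and skip WAL with UNLOGGED tables.
    -- Neither is crash-safe, which is usually acceptable for unit tests.
    CREATE TABLESPACE ram_ts LOCATION '/mnt/ramdisk/pgdata';

    CREATE UNLOGGED TABLE test_fixture (
        id   int PRIMARY KEY,
        body text
    ) TABLESPACE ram_ts;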

Multicolumn index on 3 fields with heterogenous data types

安稳与你 submitted on 2019-11-26 12:45:10
Question: I have a Postgres table with 3 fields:

    a : PostGIS geometry
    b : varchar[] array
    c : integer

and I have a query that involves all of them. I would like to add a multicolumn index to speed it up, but I cannot, as the 3 fields cannot go into the same index because of their differing natures. What is the strategy in this case? Do I add 3 indexes (GiST, GIN and B-tree), and will Postgres use them all during the query?

Answer 1: Single-column indexes. First of all, Postgres can combine multiple indexes very efficiently in a
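A sketch of that setup (table and column names are hypothetical; the PostGIS extension is assumed for the geometry column):

    CREATE TABLE places (
        geom     geometry,
        tags     varchar[],
        category integer
    );

    -- One index per column, each with the access method suited to its type.
    CREATE INDEX places_geom_gist ON places USING gist (geom);
    CREATE INDEX places_tags_gin  ON places USING gin (tags);
    CREATE INDEX places_cat_btree ON places (category);

    -- The planner can combine these via BitmapAnd when a query filters on all three.
    EXPLAIN
    SELECT *
    FROM places
    WHERE geom && ST_MakeEnvelope(0, 0, 1, 1, 4326)
      AND tags @> ARRAY['shop']::varchar[]
      AND category = 3;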

Optimize Postgres timestamp query range

穿精又带淫゛_ submitted on 2019-11-26 12:29:49
Question: I have the following table and indices defined:

    CREATE TABLE ticket (
        wid bigint NOT NULL DEFAULT nextval('tickets_id_seq'::regclass),
        eid bigint,
        created timestamp with time zone NOT NULL DEFAULT now(),
        status integer NOT NULL DEFAULT 0,
        argsxml text,
        moduleid character varying(255),
        source_id bigint,
        file_type_id bigint,
        file_name character varying(255),
        status_reason character varying(255),
        ...
    )

I created an index on the created timestamp as follows:

    CREATE INDEX ticket_1_idx ON ticket
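The usual pattern for this kind of query (a sketch; the index name is hypothetical and the interval is just an example) is a plain B-tree index on the timestamp column plus a half-open range predicate, which the index can satisfy directly:

    CREATE INDEX ticket_created_idx ON ticket (created);

    -- Half-open range: includes the lower bound, excludes the upper bound.
    EXPLAIN (ANALYZE)
    SELECT wid, eid, status
    FROM ticket
    WHERE created >= now() - interval '7 days'
      AND created <  now();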

Best way to delete millions of rows by ID

被刻印的时光 ゝ submitted on 2019-11-26 12:22:09
Question: I need to delete about 2 million rows from my PG database. I have a list of IDs that I need to delete. However, every way I try to do this takes days. I tried putting them in a table and deleting in batches of 100. Four days later, this is still running with only 297,268 rows deleted. (I had to select 100 IDs from an ID table, delete WHERE IN that list, then delete those 100 from the ids table.) I tried:

    DELETE FROM tbl WHERE id IN (SELECT * FROM ids)

That's taking forever, too. Hard to
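One commonly recommended shape for this (a sketch; tbl and ids are from the question, everything else is illustrative) is to give the ID list a primary key, analyze it, and delete with a join instead of IN (SELECT ...):

    -- An indexed staging table for the IDs to remove.
    CREATE TEMP TABLE ids_to_delete (id int PRIMARY KEY);
    -- COPY ids_to_delete FROM '...';   -- or INSERT the ID list
    ANALYZE ids_to_delete;

    -- Single set-based delete using a join.
    DELETE FROM tbl
    USING ids_to_delete d
    WHERE tbl.id = d.id;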

Way to try multiple SELECTs till a result is available?

送分小仙女□ submitted on 2019-11-26 07:50:11
Question: What if I want to search for a single row in a table with decreasing precision, e.g. like this:

    SELECT * FROM image WHERE name LIKE 'text' AND group_id = 10 LIMIT 1

When this gives me no result, try this one:

    SELECT * FROM image WHERE name LIKE 'text' LIMIT 1

And when this gives me no result, try this one:

    SELECT * FROM image WHERE group_id = 10 LIMIT 1

Is it possible to do that with just one expression? A problem also arises when I have not two but e.g. three or more search
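One way to express the fallback in a single statement (a sketch; it assumes only the most precise matching tier should contribute rows) is UNION ALL with NOT EXISTS guards, so a less precise branch only fires when every stricter branch is empty:

    SELECT * FROM image
    WHERE name LIKE 'text' AND group_id = 10
    UNION ALL
    SELECT * FROM image
    WHERE name LIKE 'text'
      AND NOT EXISTS (SELECT 1 FROM image
                      WHERE name LIKE 'text' AND group_id = 10)
    UNION ALL
    SELECT * FROM image
    WHERE group_id = 10
      AND NOT EXISTS (SELECT 1 FROM image WHERE name LIKE 'text')
    LIMIT 1;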

Finding similar strings with PostgreSQL quickly

血红的双手。 submitted on 2019-11-26 06:31:26
Question: I need to create a ranking of similar strings in a table. I have the following table:

    create table names (
        name character varying(255)
    );

Currently, I'm using the pg_trgm module, which offers the similarity function, but I have an efficiency problem. I created an index as the Postgres manual suggests:

    CREATE INDEX trgm_idx ON names USING gist (name gist_trgm_ops);

and I'm executing the following query:

    select (similarity(n1.name, n2.name)) as sim, n1.name, n2.name
    from names n1, names n2
    where
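The trigram GiST index cannot help a bare similarity() call in the WHERE clause; it is used by the % similarity operator. A sketch of the index-friendly form (the 0.45 threshold is just an example):

    SELECT set_limit(0.45);             -- threshold used by the % operator

    SELECT similarity(n1.name, n2.name) AS sim, n1.name, n2.name
    FROM names n1
    JOIN names n2
      ON n1.name <> n2.name
     AND n1.name % n2.name              -- can use the gist_trgm_ops index
    ORDER BY sim DESC;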

Any downsides of using data type “text” for storing strings?

半腔热情 submitted on 2019-11-26 01:27:55
Question: As per the Postgres documentation, there are three data types for character data:

    character varying(n), varchar(n)    variable-length with limit
    character(n), char(n)               fixed-length, blank padded
    text                                variable, unlimited length

In my application, I came across a few unpleasant scenarios where insert/update queries failed because the text to be inserted exceeded the varchar(n) or char(n) limit. In such cases, changing the data type of those columns to text sufficed. My questions are: If we generalize
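A pattern often suggested when a limit is still wanted but text is preferred (a sketch; table, column and limit are hypothetical) is a CHECK constraint on the length, which can be changed later without rewriting the column:

    CREATE TABLE notes (
        id   serial PRIMARY KEY,
        body text NOT NULL CHECK (length(body) <= 2048)
    );

    -- Raising the limit later only swaps the constraint:
    -- ALTER TABLE notes DROP CONSTRAINT notes_body_check;
    -- ALTER TABLE notes ADD CONSTRAINT notes_body_check CHECK (length(body) <= 8192);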

Keep PostgreSQL from sometimes choosing a bad query plan

随声附和 submitted on 2019-11-26 00:18:29
Question: I have a strange problem with PostgreSQL performance for a query, using PostgreSQL 8.4.9. This query is selecting a set of points within a 3D volume, using a LEFT OUTER JOIN to add a related ID column where that related ID exists. Small changes in the x range can cause PostgreSQL to choose a different query plan, which takes the execution time from 0.01 seconds to 50 seconds. This is the query in question:

    SELECT treenode.id AS id,
           treenode.parent_id AS parentid,
           (treenode.location).x AS x,
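When small predicate changes flip the plan like this, the usual first-line remedies (a sketch; the statistics target of 1000 is only an example, and the cost change is a session-level experiment, not a recommendation for postgresql.conf) are fresher and finer-grained planner statistics:

    -- Refresh planner statistics for the table driving the range predicate.
    ANALYZE treenode;

    -- Collect a larger statistics sample for subsequent ANALYZE runs.
    SET default_statistics_target = 1000;
    ANALYZE treenode;

    -- Session-only experiment: if the data is mostly cached, lowering
    -- random_page_cost often makes the planner keep the fast index plan.
    SET random_page_cost = 1.5;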