The query is basically:
SELECT DISTINCT "my_table"."foo" FROM "my_table" WHERE...
Pretending that I'm 100% certain the DISTINCT is what's causing the query to run slowly:
Oftentimes, you can make such queries run faster by working around the DISTINCT and using a GROUP BY instead:
select my_table.foo
from my_table
where [whatever where conditions you want]
group by foo;
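If you want to check whether this actually helps on your data, you can compare the two plans directly with EXPLAIN ANALYZE. A minimal sketch, reusing the table and column names from the question; the filter on bar is a made-up stand-in for your real WHERE conditions:

-- Compare the two variants; the plans and timings will show
-- whether the planner handles GROUP BY better than DISTINCT here.
EXPLAIN ANALYZE
SELECT DISTINCT my_table.foo
FROM my_table
WHERE my_table.bar > 0;   -- hypothetical filter, replace with yours

EXPLAIN ANALYZE
SELECT my_table.foo
FROM my_table
WHERE my_table.bar > 0    -- same hypothetical filter
GROUP BY foo;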
You can also try increasing the work_mem setting. Depending on the size of your dataset, it can make the planner switch to hash aggregates, which are usually faster.
But before setting it too high globally, read up on it first. You can easily blow up your server, because the max_connections setting effectively acts as a multiplier for this number.
This means that if you set work_mem = 128MB
and max_connections = 100
(the default), you should have more than 12.8GB of RAM available. You're essentially telling the server it can use that much memory just for running queries (not counting any other memory used by Postgres or anything else on the machine).
Your DISTINCT is causing the database to sort the output rows in order to find duplicates. If you put an index on the column(s) selected by the query, the database may be able to read them out in index order and skip the sort step. A lot will depend on the details of the query and the tables involved; saying that you "know the problem is with the DISTINCT" really limits the scope of available answers.
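An index on the selected column is cheap to try if you want to test that idea. A minimal sketch, assuming the foo column from the query above; the index name is made up:

-- Hypothetical index on the selected column; it lets Postgres read the
-- values in index order (or via an index-only scan) instead of sorting.
CREATE INDEX my_table_foo_idx ON my_table (foo);

-- Then check whether the sort node disappeared from the plan:
EXPLAIN ANALYZE
SELECT DISTINCT my_table.foo
FROM my_table;   -- add your WHERE conditions back in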