In Postgres, you can specify an IN clause, like this:
SELECT * FROM "user" WHERE id IN (1000, 1001, 1002)
Does anyone know what's the maximum number of values you can put in the IN clause?
According to the source code located here, starting at line 850, PostgreSQL doesn't explicitly limit the number of arguments.
The following is a code comment from line 870:
/*
* We try to generate a ScalarArrayOpExpr from IN/NOT IN, but this is only
* possible if the inputs are all scalars (no RowExprs) and there is a
* suitable array type available. If not, we fall back to a boolean
* condition tree with multiple copies of the lefthand expression.
* Also, any IN-list items that contain Vars are handled as separate
* boolean conditions, because that gives the planner more scope for
* optimization on such clauses.
*
* First step: transform all the inputs, and detect whether any are
* RowExprs or contain Vars.
*/
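A quick way to see this transformation in practice (a sketch; the exact plan shape depends on your table and indexes) is to EXPLAIN such a query. A scalar IN list shows up in the plan as a single = ANY(array) comparison rather than a chain of OR conditions:

EXPLAIN SELECT * FROM "user" WHERE id IN (1000, 1001, 1002);
-- With a sequential scan, the filter appears as something like:
--   Filter: (id = ANY ('{1000,1001,1002}'::integer[]))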
You might want to consider refactoring that query instead of adding an arbitrarily long list of IDs... You could use a range if the IDs indeed follow the pattern in your example:
SELECT * FROM "user" WHERE id >= minValue AND id <= maxValue;
Another option is to add an inner select:
SELECT *
FROM "user"
WHERE id IN (
SELECT userId
FROM ForumThreads ft
WHERE ft.id = X
);
If you have a query like:
SELECT * FROM "user" WHERE id IN (1, 2, 3, 4 /* ... and thousands of other keys */)
you may improve performance by rewriting it as:
SELECT * FROM "user" WHERE id = ANY(VALUES (1), (2), (3), (4) /* ... and thousands of other keys */)
With a long list, the VALUES form lets the planner treat the keys as a relation and join against them (typically with a hash), instead of comparing every row to one large array.
There is no limit on the number of elements you can pass to an IN clause. With many elements, PostgreSQL treats the list as an array, and for each row scanned it checks whether the value is contained in that array. This approach doesn't scale well. Instead of an IN clause, try an INNER JOIN against a temp table; see http://www.xaprb.com/blog/2006/06/28/why-large-in-clauses-are-problematic/ for more information. An INNER JOIN scales well because the query optimizer can use a hash join and other optimizations, whereas with an IN clause the optimizer has no way to optimize the query. I have noticed a speedup of at least 2x with this change.
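A minimal sketch of the temp-table approach (the wanted_ids table name is a placeholder, not from the original answer):

-- Load the keys into a temporary table once...
CREATE TEMP TABLE wanted_ids (id integer PRIMARY KEY);
INSERT INTO wanted_ids (id) VALUES (1), (2), (3), (4); -- ...and thousands more

-- ...then join against it instead of using a huge IN list,
-- so the planner can build a hash from the key set.
SELECT u.*
FROM "user" u
INNER JOIN wanted_ids w ON w.id = u.id;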
This is not really an answer to the present question, but it might help others.
At least I can say there is a technical limit of 32767 values (= Short.MAX_VALUE) that can be passed to the PostgreSQL backend, using PostgreSQL's JDBC driver 9.1.
This is a test of "delete from x where id in (... 100k values...)" with the PostgreSQL JDBC driver:
Caused by: java.io.IOException: Tried to send an out-of-range integer as a 2-byte value: 100000
at org.postgresql.core.PGStream.SendInteger2(PGStream.java:201)
Just tried it: the answer is -> out-of-range integer as a 2-byte value: 32768
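For context: 32767 is the driver-side cap on bind parameters per statement, because the parameter count is sent as a 2-byte integer in the wire protocol, which is exactly what the SendInteger2 call above is complaining about. A common workaround (a sketch, not part of the original answer) is to send the whole list as a single array value, so the statement carries one parameter no matter how many IDs it holds:

-- One bind parameter regardless of list size; in JDBC this would be bound
-- with Connection.createArrayOf(...) and PreparedStatement.setArray(...).
DELETE FROM x WHERE id = ANY(?);

-- The same idea with an inline array literal instead of a parameter:
DELETE FROM x WHERE id = ANY('{1,2,3,4}'::integer[]);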