I'm using the following code:
st = connection.createStatement(
    ResultSet.CONCUR_READ_ONLY,
    ResultSet.FETCH_FORWARD,
    ResultSet.TYPE_FORWARD_ONLY);
st.setFetchSize(1000);
rs = st.executeQuery(queryString);
The two queries do entirely different things. Using the LIMIT clause limits the size of the result set to 10000, while setting the fetch size does not; it instead gives the driver a hint about how many rows to fetch at a time when iterating through the result set, which still includes all 800k rows. So when using setFetchSize, the database creates the full result set, which is why it's taking so long.
Edit for clarity: setting the fetch size does nothing unless you iterate through the result (see Jon's comment), but creating a much smaller result set via LIMIT makes a great difference.
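To make the contrast concrete, here is a minimal sketch (the table name big_table and the 1000-row fetch size are made up for illustration):
// LIMIT: the server builds a result set of at most 10000 rows
rs = st.executeQuery("SELECT * FROM big_table LIMIT 10000");
// Fetch size: the full 800k-row result set is still created; the
// driver merely retrieves it roughly 1000 rows per round trip
// while you iterate
st.setFetchSize(1000);
rs = st.executeQuery("SELECT * FROM big_table");
while (rs.next()) {
    // each rs.next() may trigger another batched fetch under the hood
}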
I think setFetchSize(...)
is there to support pagination. But if you just want to limit the number of rows, use this instead:
st.setMaxRows(1000);
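For context, a sketch of how that call fits in (assuming queryString holds your query). Unlike the fetch-size hint, setMaxRows is a hard cap, and rows beyond the limit are silently dropped:
st = connection.createStatement();
st.setMaxRows(1000);  // at most 1000 rows will ever be returned
rs = st.executeQuery(queryString);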
Try turning auto-commit off:
// make sure autocommit is off; some drivers (PostgreSQL's, for
// example) only stream results with a cursor when it is disabled
connection.setAutoCommit(false);
st = connection.createStatement();
st.setFetchSize(1000);  // fetch 1000 rows per round trip while iterating
System.out.println("start query ");
rs = st.executeQuery(queryString);
System.out.println("done query");
Reference
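One caveat worth a sketch: with autocommit off you are now inside an explicit transaction, so end it (and close the resources) once you have iterated the result:
try {
    while (rs.next()) {
        // process each row; batches are fetched lazily, 1000 at a time
    }
} finally {
    rs.close();
    st.close();
    connection.commit();  // or rollback(); a driver-side cursor only lives within this transaction
}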
I noticed that your use of the API differs from what the Javadoc specifies. Try passing the parameters in this order (see the sketch after the list):
ResultSet.TYPE_FORWARD_ONLY,
ResultSet.CONCUR_READ_ONLY,
ResultSet.FETCH_FORWARD
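A sketch of that fix. Note that per the Javadoc the two-arg createStatement overload takes the type and concurrency, while the fetch direction is set separately on the statement (the three-arg overload's third parameter is actually a holdability constant such as ResultSet.CLOSE_CURSORS_AT_COMMIT, not a fetch direction):
st = connection.createStatement(
        ResultSet.TYPE_FORWARD_ONLY,
        ResultSet.CONCUR_READ_ONLY);
st.setFetchDirection(ResultSet.FETCH_FORWARD);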
This will depend on your driver. From the docs:
Gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed. The number of rows specified affects only result sets created using this statement. If the value specified is zero, then the hint is ignored. The default value is zero.
Note that it says "a hint" - I would take that to mean that a driver can ignore the hint if it really wants to... and it sounds like that's what's happening.
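You can read the value back to see what the statement recorded, though even then the driver may ignore it at execution time:
st.setFetchSize(1000);
System.out.println("fetch size hint: " + st.getFetchSize());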