What is the max JDBC batch size?

心在旅途 2021-02-04 06:59

I have a list that keeps growing, and I add its elements to a JDBC batch. I forgot to set a limit that calls executeBatch once the batch reaches a specified size.

Program
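The program itself was not included in the question. As a sketch of what the fix looks like, the loop below flushes with `executeBatch` every fixed number of rows instead of letting the batch grow without bound. The table name, column, and batch size of 1000 are assumptions for illustration, not from the question.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;

public class BatchInsert {
    static final int BATCH_SIZE = 1000; // assumed flush threshold

    // Pure helper: should we flush after adding the n-th row (1-based)?
    public static boolean shouldFlush(int rowsAdded, int batchSize) {
        return rowsAdded % batchSize == 0;
    }

    public static void insertAll(Connection conn, List<String> values) throws Exception {
        // Hypothetical table "items" with a single text column "name".
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO items (name) VALUES (?)")) {
            int count = 0;
            for (String v : values) {
                ps.setString(1, v);
                ps.addBatch();
                count++;
                if (shouldFlush(count, BATCH_SIZE)) {
                    ps.executeBatch(); // send accumulated rows, free driver memory
                }
            }
            ps.executeBatch(); // flush any remainder
        }
    }
}
```

The flush threshold is a tuning knob, not a hard limit; the answer below explains why very large batches stop paying off.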

3 Answers
  •  余生分开走
    2021-02-04 07:27

    PgJDBC has some limitations regarding batches:

    • All request values, and all results, must be accumulated in memory. This includes large blob/clob results. So free memory is the main limiting factor for batch size.

    • Until PgJDBC 9.4 (not yet released), batches that return generated keys always do a round trip for every entry, so they're no better than individual statement executions.

    • Even in 9.4, batches that return generated keys only offer a benefit if the generated values are size limited. A single text, bytea or unconstrained varchar field in the requested result will force the driver to do a round trip for every execution.

    The benefit of batching is a reduction in network round trips, so there's much less point if your DB is local to your app server. There are diminishing returns with increasing batch size, because the total time spent in network waits falls off quickly, so it's often not worth stressing about trying to make batches as big as possible.

    If you're bulk-loading data, seriously consider using the COPY API instead, via PgJDBC's CopyManager, obtained via the PGConnection interface. It lets you stream CSV-like data to the server for rapid bulk loading with very few client/server round trips. Unfortunately, it's remarkably under-documented: it doesn't appear in the main PgJDBC docs at all, only in the API docs.
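    A minimal sketch of the CopyManager route. `CopyManager`, `getCopyAPI()`, and `copyIn(String, Reader)` are real PgJDBC APIs; the `items` table, its columns, and the naive CSV formatting (no quoting of commas or newlines) are assumptions for illustration only.

```java
import java.io.StringReader;
import java.sql.Connection;
import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class CopyLoad {
    // Pure helper: render rows as CSV for COPY ... FROM STDIN.
    // Real data needs proper CSV escaping; this skips it for brevity.
    public static String toCsv(String[][] rows) {
        StringBuilder sb = new StringBuilder();
        for (String[] row : rows) {
            sb.append(String.join(",", row)).append('\n');
        }
        return sb.toString();
    }

    public static long load(Connection conn, String[][] rows) throws Exception {
        // Unwrap the PostgreSQL-specific connection to reach the COPY API.
        CopyManager copy = conn.unwrap(PGConnection.class).getCopyAPI();
        // Streams all rows to the server in one COPY operation;
        // returns the number of rows loaded.
        return copy.copyIn(
            "COPY items (id, name) FROM STDIN WITH (FORMAT csv)",
            new StringReader(toCsv(rows)));
    }
}
```

    Compared with even a well-tuned batch, COPY avoids per-statement protocol overhead entirely, which is why it wins for large loads.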
