Does a JDBC ResultSet store all rows in JVM memory?

無奈伤痛 2021-01-22 22:21

I am using a Java JDBC application to fetch about 500,000 records from the DB. The database being used is Oracle. I write the data into a file as soon as each row is fetched. Since the whole fetch takes about an hour, I want to know whether the JDBC ResultSet stores all 500,000 rows in JVM memory as they are fetched.

2 answers
  • 2021-01-22 22:54

    While increasing the fetch size may help performance a bit, I would also look into tuning the SDU size, which controls the size of the packets at the SQL*Net layer. Increasing the SDU size can speed up data transfers.

    Of course, the time it takes to fetch these 500,000 rows largely depends on how much data you're fetching. If it takes an hour, I'm guessing you're fetching a lot of data and/or doing it from a remote client over a WAN.

    To change the SDU size:

    First, change the default SDU size on the server to 32K (starting in 11.2.0.3 you can go up to 64KB, and up to 2MB starting in 12c) by changing or adding this line in sqlnet.ora on the server: DEFAULT_SDU_SIZE=32767

    Then modify your JDBC URL to use the full connect descriptor and include the SDU setting: jdbc:oracle:thin:@(DESCRIPTION=(SDU=32767)(ADDRESS=(PROTOCOL=tcp)(HOST=...)(PORT=...))(CONNECT_DATA=(SERVICE_NAME=...)))
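
    As a minimal sketch (not from the original answer), here is how such a descriptor-style URL might be used from plain JDBC; the host, port, service name, and credentials are hypothetical placeholders, and it assumes the Oracle thin driver (ojdbc) is on the classpath:

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class SduConnect {
            public static void main(String[] args) throws Exception {
                // Descriptor-style URL carrying the SDU setting; HOST, PORT and
                // SERVICE_NAME below are hypothetical placeholders.
                String url = "jdbc:oracle:thin:@(DESCRIPTION=(SDU=32767)"
                        + "(ADDRESS=(PROTOCOL=tcp)(HOST=dbhost)(PORT=1521))"
                        + "(CONNECT_DATA=(SERVICE_NAME=orcl)))";
                try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                    System.out.println("Connected with SDU=32767");
                }
            }
        }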

  • 2021-01-22 23:09

    It depends. Different drivers may behave differently, and different ResultSet settings may behave differently.

    If you have a CONCUR_READ_ONLY, FETCH_FORWARD, TYPE_FORWARD_ONLY ResultSet, the driver will almost certainly hold in memory only about the number of rows that corresponds to your fetch size (data for earlier rows will, of course, remain in memory for some period until it is garbage-collected).

    If you have a TYPE_SCROLL_INSENSITIVE ResultSet, on the other hand, the driver is very likely to keep all the fetched data in memory so that you can scroll backwards and forwards through it. That is not the only possible way to implement scrolling, so different drivers (and different versions of drivers) may behave differently, but it is the simplest approach and the one most drivers I've come across take.
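
    To make the forward-only case concrete, here is a hedged sketch (the table, columns, URL, and credentials are made-up placeholders) that streams a large result straight to a file using a TYPE_FORWARD_ONLY, CONCUR_READ_ONLY ResultSet and an explicit fetch size, so only roughly one fetch batch needs to live in JVM memory at a time:

        import java.io.BufferedWriter;
        import java.io.FileWriter;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class StreamToFile {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:oracle:thin:@//dbhost:1521/orcl"; // placeholder
                try (Connection conn = DriverManager.getConnection(url, "user", "password");
                     // Forward-only, read-only: the driver only needs to buffer
                     // about one fetch batch of rows at a time.
                     Statement stmt = conn.createStatement(
                             ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
                     BufferedWriter out = new BufferedWriter(new FileWriter("rows.txt"))) {

                    // Pull 500 rows per round trip instead of the Oracle thin
                    // driver's default of 10; tune this against your row size.
                    stmt.setFetchSize(500);

                    try (ResultSet rs = stmt.executeQuery("SELECT id, name FROM big_table")) {
                        while (rs.next()) {
                            // Write each row as soon as it is fetched; earlier rows
                            // become garbage-collectable once written.
                            out.write(rs.getLong("id") + "," + rs.getString("name"));
                            out.newLine();
                        }
                    }
                }
            }
        }

    With these settings, memory use stays roughly proportional to the fetch size rather than to the 500,000-row total.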
