Batch size in Kafka JDBC sink connector


Question


I want to read only 5000 records in a batch through the JDBC sink, for which I've set batch.size in the JDBC sink config file:

name=jdbc-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
batch.size=5000
topics=postgres_users

connection.url=jdbc:postgresql://localhost:34771/postgres?user=foo&password=bar
file=test.sink.txt
auto.create=true

But batch.size has no effect: records are inserted into the database as soon as new records are inserted into the source database.

How can I make it insert in batches of 5000?


Answer 1:


There is no direct way to make the sink write records in batches of an exact size, but you can try tuning the properties below and see whether that helps (there is an example worker config after the list). I have never tried this myself, but my understanding is that a Kafka sink connector is nothing but a consumer that consumes messages from the topic.

max.poll.records: The maximum number of records returned in a single call to poll().

consumer.fetch.min.bytes: The minimum amount of data the server should return for a fetch request. If insufficient data is available, the request will wait for that much data to accumulate before answering.

fetch.wait.max.ms: The broker will wait for this amount of time before sending a response to the consumer client, unless it has enough data to fill the response (fetch.message.max.bytes).

fetch.min.bytes: The broker will wait for this amount of data to accumulate before it sends the response to the consumer client.
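
For example, since a sink task's consumer picks up settings from the Connect worker configuration (where consumer settings take a consumer. prefix), you could try something like the following. This is only a sketch; the property values are illustrative and I have not verified them for this exact scenario:

# In the Connect worker config (connect-standalone.properties or connect-distributed.properties),
# properties prefixed with "consumer." are passed to the sink tasks' underlying consumers.
# Hand the sink task at most 5000 records per poll:
consumer.max.poll.records=5000
# Illustrative: ask the broker to let roughly 1 MB accumulate before responding...
consumer.fetch.min.bytes=1048576
# ...but respond after at most 500 ms even if less data is available:
consumer.fetch.max.wait.ms=500

Keeping batch.size=5000 in the connector config should then allow the JDBC writer to group up to 5000 records per insert batch, since it can never batch more records than the consumer delivers in a single poll.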



Source: https://stackoverflow.com/questions/58552372/batch-size-in-kafka-jdbc-sink-connector
