kafka-producer-api

Spring Batch: One Reader, composite processor (two classes with different entities) and two KafkaItemWriters

不打扰是莪最后的温柔 submitted on 2021-02-11 14:51:38
Question: The ItemReader reads data from DB2 and produces a ClaimDto Java object. The ClaimProcessor then takes the ClaimDto and returns a CompositeClaimRecord object, which comprises claimRecord1 and claimRecord2, to be sent to two different Kafka topics. How can claimRecord1 and claimRecord2 be written to topic1 and topic2 respectively?

Answer 1: Just write a custom ItemWriter that does exactly that:

public class YourItemWriter implements ItemWriter<CompositeClaimRecord> { private final ItemWriter…
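
A minimal sketch of the delegating writer the answer describes, assuming Spring Batch 4.x (where write() receives a List) and two already-configured KafkaItemWriter delegates; the class, field, and getter names are illustrative, not from the original answer:

```java
import java.util.List;
import java.util.stream.Collectors;

import org.springframework.batch.item.ItemWriter;

// Delegating writer: splits each CompositeClaimRecord and forwards the two
// halves to writers bound to topic1 and topic2 respectively.
public class CompositeClaimRecordWriter implements ItemWriter<CompositeClaimRecord> {

    private final ItemWriter<ClaimRecord1> topic1Writer; // e.g. a KafkaItemWriter for topic1
    private final ItemWriter<ClaimRecord2> topic2Writer; // e.g. a KafkaItemWriter for topic2

    public CompositeClaimRecordWriter(ItemWriter<ClaimRecord1> topic1Writer,
                                      ItemWriter<ClaimRecord2> topic2Writer) {
        this.topic1Writer = topic1Writer;
        this.topic2Writer = topic2Writer;
    }

    @Override
    public void write(List<? extends CompositeClaimRecord> items) throws Exception {
        // Project out each half of the chunk and hand it to its own writer.
        topic1Writer.write(items.stream()
                .map(CompositeClaimRecord::getClaimRecord1).collect(Collectors.toList()));
        topic2Writer.write(items.stream()
                .map(CompositeClaimRecord::getClaimRecord2).collect(Collectors.toList()));
    }
}
```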

Stream CSV data in Kafka-Python

落爺英雄遲暮 submitted on 2021-02-11 12:14:11
Question: I am sending CSV data to a Kafka topic using kafka-python. The data is sent and received by the consumer successfully. Now I am trying to stream a CSV file continuously; any new entry added to the file should be automatically sent to the Kafka topic. Any suggestion on continuously streaming a CSV file would be helpful. Below is my existing code:

from kafka import KafkaProducer
import logging
from json import dumps, loads
import csv

logging.basicConfig(level=logging.INFO)
producer = KafkaProducer(bootstrap…
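
One possible approach, sketched below: keep the file open, seek to its end, and poll for newly appended lines, sending each one as it appears. The file name, topic name, and broker address are assumptions:

```python
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["localhost:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Follow the CSV file like `tail -f`: only entries appended after startup
# are streamed to the topic.
with open("data.csv") as f:
    f.seek(0, 2)  # jump to the current end of the file
    while True:
        line = f.readline()
        if not line:
            time.sleep(1.0)  # nothing new yet; poll again shortly
            continue
        row = line.rstrip("\n").split(",")
        producer.send("csv-topic", value=row)
```

A file-watching library (for example watchdog) could replace the sleep-and-poll loop if lower latency is needed.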

Understanding Kafka poll(), flush() & commit()

我与影子孤独终老i submitted on 2021-02-08 03:28:25
Question: I'm new to Kafka and am trying out a few small use cases for my new application. The use case is basically: Kafka producer -> Kafka consumer -> Flume Kafka source -> Flume HDFS sink. When consuming (step 2), the sequence of steps is:

1. consumer.Poll(1.0)
   1.a. Produce to multiple topics (multiple Flume agents are listening)
   1.b. Produce.Poll()
2. Flush() every 25 msgs
3. Commit() every msg (asynchCommit=false)

Question 1: Is this sequence of actions right? Question 2: Will this cause any data…
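
A sketch of that sequence using confluent-kafka-python (which the consumer.Poll(1.0) signature suggests); the broker address, group id, and topic names are assumptions:

```python
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "relay",
    "enable.auto.commit": False,  # commit manually, as in the question
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["source-topic"])

produced = 0
while True:
    msg = consumer.poll(1.0)                    # 1. poll for a record
    if msg is None or msg.error():
        continue
    for topic in ("flume-topic-1", "flume-topic-2"):
        producer.produce(topic, msg.value())    # 1.a. produce to multiple topics
        produced += 1
    producer.poll(0)                            # 1.b. serve delivery callbacks
    if produced >= 25:
        producer.flush()                        # 2. flush every ~25 messages
        produced = 0
    # 3. synchronous commit. Note: committing an offset before the produced
    # records have been flushed risks data loss if the process dies in between.
    consumer.commit(asynchronous=False)
```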

Why is there inconsistency in Kafka's ordering guarantees when using Idempotent Producer?

妖精的绣舞 submitted on 2021-02-06 11:27:26
Question: I am using Kafka 1.0.1 in my application, and I have started using the idempotent producer feature that was introduced in 0.11, but I'm having trouble understanding the ordering guarantees when using the idempotence feature. My producer's configuration is:

enable.idempotence = true
max.in.flight.requests.per.connection = 5
retries = 50
acks = all

According to the documentation: retries: Setting a value greater than zero will cause the client to resend any record whose send fails with a…
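
For reference, that configuration expressed as Java client properties; the bootstrap address and serializers are assumptions. (The inconsistency the question appears to refer to: the original 0.11 documentation required max.in.flight.requests.per.connection = 1 for the idempotent producer, whereas from 1.0 onward per-partition ordering is preserved with up to 5 in-flight requests.)

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducerConfig {
    public static KafkaProducer<String, String> build() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // The four settings quoted in the question.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "5");
        props.put(ProducerConfig.RETRIES_CONFIG, "50");
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        return new KafkaProducer<>(props);
    }
}
```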

What is the difference between Kafka partitions and Kafka replicas?

时光毁灭记忆、已成空白 submitted on 2021-02-05 09:21:46
Question: I created a 3-broker Kafka setup with broker ids 20, 21, 22. Then I created this topic:

bin/kafka-topics.sh --zookeeper localhost:2181 \
  --create --topic zeta --partitions 4 --replication-factor 3

(The resulting topic description was shown in a screenshot that is not reproduced here.) When a producer sends the message "hello world" to topic zeta, to which partition does Kafka first write the message? Does the "hello world" message get replicated in all 4 partitions? Does each of the 3 brokers contain all 4 partitions? How is that related to replica…
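
A record is written to exactly one partition, not copied to all four; replication-factor 3 then keeps copies of each partition's log on 3 brokers. With a key, the Java client's default partitioner picks the partition by hashing the key, as sketched below (keyless records are spread across partitions instead); the key here is a made-up example:

```java
import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.utils.Utils;

public class PartitionDemo {
    public static void main(String[] args) {
        byte[] keyBytes = "claim-42".getBytes(StandardCharsets.UTF_8);
        int numPartitions = 4; // topic zeta above has 4 partitions
        // murmur2 hash of the key, modulo the partition count: this is the
        // computation the default partitioner performs for keyed records.
        int partition = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        System.out.println("records with this key always land in partition " + partition);
    }
}
```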

Whole cluster failing if one kafka node goes down?

两盒软妹~` submitted on 2021-01-29 22:54:54
Question: I have a 3-node Kafka cluster, each node running both ZooKeeper and Kafka. If I explicitly kill the leader node (both its ZooKeeper and Kafka processes), the whole cluster stops accepting any incoming data and waits for the node to come back. The topic was created using this command:

kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 3 min.insync.replicas=2 --partitions 6 --topic logs

Node 1 server.properties:

broker.id=0
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://10.0.2.4:9092…
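
One detail worth checking in the command above: topic-level settings such as min.insync.replicas normally have to be passed through the --config flag, so as written the option may not have been applied at all. A corrected form would presumably be:

```sh
kafka-topics.sh --create --bootstrap-server localhost:9092 \
  --replication-factor 3 --partitions 6 \
  --config min.insync.replicas=2 --topic logs
```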

Wait for List of ListenableFuture returned by Kafka Send API

感情迁移 submitted on 2021-01-29 21:56:01
Question: I have a List of ListenableFuture, and I want to wait on this List<ListenableFuture<SendResult<Integer, String>>> for at most 15 minutes in total if they have not completed. How can I achieve that? Currently I am doing the following, but it waits up to 15 minutes for every ListenableFuture, which is not what I want:

for (ListenableFuture<SendResult<Integer, String>> m : myFutureList) {
    m.get(15, TimeUnit.MINUTES);
}

ListenableFuture<SendResult<Integer, String>> is from import org.springframework.util.concurrent…
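
A minimal sketch of the usual fix: compute one shared deadline up front and give each get() only the time that remains, so the whole list shares a single 15-minute budget; the method and variable names are illustrative:

```java
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;

public class DeadlineWait {
    public static void awaitAll(List<ListenableFuture<SendResult<Integer, String>>> futures)
            throws Exception {
        // One budget for the whole list, not 15 minutes per future.
        long deadlineNanos = System.nanoTime() + TimeUnit.MINUTES.toNanos(15);
        for (ListenableFuture<SendResult<Integer, String>> future : futures) {
            long remaining = deadlineNanos - System.nanoTime();
            if (remaining <= 0) {
                throw new TimeoutException("15-minute budget exhausted");
            }
            future.get(remaining, TimeUnit.NANOSECONDS); // waits only for what is left
        }
    }
}
```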