kafka-consumer-api

Kafka manual ackMode MANUAL_IMMEDIATE: what happens if a message is not acknowledged?

ぐ巨炮叔叔 submitted on 2021-02-11 13:53:34

Question: I use Spring Kafka and set ackMode to MANUAL_IMMEDIATE: props.setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL_IMMEDIATE); The scenario is that, for some reason, my app could not acknowledge (acknowledgment.acknowledge()) a message and just misses it without any exception. 1. How can I set up consumer retries for the missed message? 2. How can I configure it to call a function once the maximum retry count I configured is reached? Answer 1: See the documentation about SeekToCurrentErrorHandler. When the listener throws an…
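
As the answer hints, the SeekToCurrentErrorHandler only engages when the listener throws, so a "missed" record has to surface as an exception rather than a silent skip. A minimal sketch of wiring it up with a bounded retry and an exhaustion callback, assuming spring-kafka 2.3+ (the configuration class and the logging in the recoverer are illustrative, not from the question):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class KafkaRetryConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        // Re-seek and retry a failed record up to 3 more times, 1 second apart;
        // after that, hand it to the recoverer callback (log, dead-letter, etc.).
        factory.setErrorHandler(new SeekToCurrentErrorHandler(
                (record, exception) -> System.err.println("Retries exhausted for " + record),
                new FixedBackOff(1000L, 3L)));
        return factory;
    }
}
```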

Reading messages for specific timestamp in kafka

匆匆过客 submitted on 2021-02-10 14:32:48

Question: I want to read all the messages starting from a specific time in Kafka; say I want to read all messages between 06:00 and 08:00. "Request messages between two timestamps from Kafka" suggests using offsetsForTimes. The problem with that solution: say my consumer is switched on every day at 13:00. The consumer would not have read any messages that day, which effectively means no offset was committed at/after 06:00, which means offsetsForTimes(<partitionname>, <0600 for that…
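
Worth noting: offsetsForTimes is answered from the broker's timestamp index, not from committed consumer offsets, so it works even if no consumer was running at 06:00. A sketch that seeks every partition to the 06:00 offset and reads until 08:00, assuming kafka-clients 2.x; the broker address and topic name are placeholders:

```java
import java.time.Duration;
import java.time.LocalDate;
import java.time.LocalTime;
import java.time.ZoneId;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public class TimeWindowReader {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        long start = LocalDate.now().atTime(LocalTime.of(6, 0))
                .atZone(ZoneId.systemDefault()).toInstant().toEpochMilli();
        long end = LocalDate.now().atTime(LocalTime.of(8, 0))
                .atZone(ZoneId.systemDefault()).toInstant().toEpochMilli();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = new ArrayList<>();
            for (PartitionInfo p : consumer.partitionsFor("my-topic")) { // assumed topic
                partitions.add(new TopicPartition(p.topic(), p.partition()));
            }
            consumer.assign(partitions);

            Map<TopicPartition, Long> query = new HashMap<>();
            for (TopicPartition tp : partitions) {
                query.put(tp, start);
            }
            // Resolved from the log's timestamp index; committed offsets are irrelevant.
            for (Map.Entry<TopicPartition, OffsetAndTimestamp> e :
                    consumer.offsetsForTimes(query).entrySet()) {
                if (e.getValue() != null) { // null: no message at/after 06:00 in that partition
                    consumer.seek(e.getKey(), e.getValue().offset());
                }
            }

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                if (records.isEmpty()) {
                    break; // simplistic stop condition for a sketch
                }
                for (ConsumerRecord<String, String> record : records) {
                    if (record.timestamp() < end) { // a robust reader would instead track
                        System.out.println(record.value()); // the 08:00 offset per partition
                    }
                }
            }
        }
    }
}
```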

How to get messages from Kafka Consumer one by one in Java?

狂风中的少年 submitted on 2021-02-08 11:43:46

Question: I'm using the Apache Kafka API and trying to get only one message at a time. I'm only writing to one topic. I can send and receive messages through a pop-up UI screen with a textbox: I input a string in the textbox and click "send", and I can send as many messages as I want. Let's say I send 3 messages: "hi", "lol", "bye". There is also a "receive" button. Right now, using the traditional code found on TutorialsPoint, I get all 3 messages (hi, lol, bye) at once printed on the…
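
One common way to get at most one message per fetch is the max.poll.records consumer setting. A minimal sketch; the broker address, group id, and topic name are placeholders:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OneAtATime {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "ui-demo");                 // assumed
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1); // at most 1 record per poll()

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // assumed topic
            // Each poll() now returns at most one record, so a "receive" button
            // handler can run one poll per click and print a single message.
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                System.out.println(record.value());
            }
        }
    }
}
```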

Kafka 0.9.0 New Java Consumer API fetching duplicate records

爱⌒轻易说出口 submitted on 2021-02-08 10:35:30

Question: I am new to Kafka and I am trying to prototype a simple consumer-producer message queue (traditional queue) model using the Apache Kafka 0.9.0 Java clients. From the producer process, I push 100 random messages to a topic configured with 3 partitions. This looks fine. I created 3 consumer threads with the same group id, subscribed to the same topic, with auto-commit enabled. Since all 3 consumer threads are subscribed to the same topic, I assume that each consumer will get a partition to consume and will…
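
Under auto-commit, offsets are flushed only every auto.commit.interval.ms, so a rebalance (for example, while the three threads join the group one after another) can reassign a partition before its latest offsets were committed, and those records get redelivered; this is one common source of duplicates in such setups. Disabling auto-commit and committing after processing narrows that window. A sketch against the 0.9.x API; the broker address, group id, topic, and the println stand-in for real processing are placeholders:

```java
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class QueueWorker implements Runnable {
    @Override
    public void run() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed
        props.put("group.id", "queue-prototype");         // assumed
        props.put("enable.auto.commit", "false");         // commit manually instead
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("my-topic")); // assumed topic
            while (true) {
                // 0.9.x poll() takes a timeout in milliseconds
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value()); // stand-in for real processing
                }
                consumer.commitSync(); // commit only after the batch is processed
            }
        }
    }
}
```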

Understanding Kafka poll(), flush() & commit()

我与影子孤独终老i submitted on 2021-02-08 03:28:25

Question: I'm new to Kafka and trying out a few small use cases for my new application. The use case is basically Kafka producer -> Kafka consumer -> Flume Kafka source -> Flume HDFS sink. When consuming (step 2), the sequence of steps is: 1. consumer.poll(1.0); 1.a. produce to multiple topics (multiple Flume agents are listening); 1.b. producer.poll(); 2. flush() every 25 msgs; 3. commit() after every msg (asynchCommit=false). Question 1: Is this sequence of actions right? Question 2: Will this cause any data…
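
One thing to check in that ordering: offsets are committed after every message, but produced output is only flushed every 25 messages, so a crash could commit offsets for input whose output is still sitting in the producer's buffer, losing data downstream. Flushing before each commit avoids that. A Java sketch of the loop (the question's pseudo-steps are client-agnostic); the topic names are placeholders:

```java
import java.time.Duration;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ConsumeProduceLoop {
    /** Poll, fan out each record to the Flume-facing topics, flush, then commit. */
    static void run(KafkaConsumer<String, String> consumer,
                    KafkaProducer<String, String> producer) {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                producer.send(new ProducerRecord<>("flume-topic-a", record.value())); // assumed
                producer.send(new ProducerRecord<>("flume-topic-b", record.value())); // assumed
            }
            if (!records.isEmpty()) {
                producer.flush();      // wait until every send above is acknowledged...
                consumer.commitSync(); // ...and only then commit the consumed offsets
            }
        }
    }
}
```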

Design Kafka consumers and producers for scalability

Deadly submitted on 2021-02-07 10:50:30

Question: I want to design a solution for sending different kinds of e-mails through several providers. The general overview: I have several upstream providers (Sendgrid, Zoho, Mailgun, etc.) that will be used to send the e-mails, for example an e-mail for registering a new user, an e-mail for removing a user, and an e-mail for a space quota limit (around 6 types of e-mails in general). Every type of e-mail should be generated by a producer, converted into a serialized Java object, and sent to the appropriate Kafka consumer…
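
One scalable shape for this is a topic per e-mail type with messages keyed by recipient, so each type gets its own consumer group that can grow independently by adding partitions and consumers. A sketch of the producing side using plain Java serialization, as the question describes; the class, topic naming scheme, and keying choice are assumptions, not from the question:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EmailEventPublisher {
    private final KafkaProducer<String, byte[]> producer;

    public EmailEventPublisher(KafkaProducer<String, byte[]> producer) {
        this.producer = producer;
    }

    /**
     * One topic per e-mail type, e.g. "email.register-user". Keying by recipient
     * keeps messages for the same address ordered within a partition.
     */
    public void publish(String topic, String recipient, Serializable event) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(event); // plain Java serialization, per the question
        }
        producer.send(new ProducerRecord<>(topic, recipient, bytes.toByteArray()));
    }
}
```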

Object not serializable (org.apache.kafka.clients.consumer.ConsumerRecord) in Java spark kafka streaming

十年热恋 submitted on 2021-02-07 07:10:22

Question: I am pretty sure that I am pushing only String data and deserializing it as String as well. The record I pushed is shown in the error too. So why is it suddenly showing this type of error? Is there anything I am missing? The code is below: import java.util.HashMap; import java.util.HashSet; import java.util.Arrays; import java.util.Collection; import java.util.Iterator; import java.util.Map; import java.util.Set; import java.util.concurrent.atomic.AtomicReference; import java.util.regex.Pattern;…
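
The usual cause is not the String payload but the wrapper: ConsumerRecord itself is not Serializable, so any Spark stage that must ship it across the cluster (printing a transformed stream, collect, checkpointed windowing) fails with exactly this error. Mapping each record to its value before such stages sidesteps it. A minimal sketch; the helper class name is illustrative:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaInputDStream;

public class ValuesOnly {
    /** Extract plain String values before any stage that must serialize the data. */
    public static JavaDStream<String> values(
            JavaInputDStream<ConsumerRecord<String, String>> stream) {
        // ConsumerRecord is NOT Serializable, but String is, so the mapped
        // stream can safely be printed, collected, or checkpointed.
        return stream.map(ConsumerRecord::value);
    }
}
```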

Differentiating between non-existent and un-authorized topic in librdkafka

醉酒当歌 submitted on 2021-02-05 11:14:14

Question: How can I make sure whether a topic is authorized or not? I need this because, in my consumer, I get the metadata for all the known topics and then make an assign call. The metadata call returns neither unauthorized topics nor non-existent topics. If a topic doesn't exist, I'll create it, and if a topic is unauthorized, I have to fail. But I don't have a way to differentiate between a non-existent and an unauthorized topic. Answer 1: You can try listing all the topics; if the topic exists, it will be there in…
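
The question is about librdkafka, but the same probe can be expressed with the Java AdminClient for illustration: describe the one topic and branch on the error type. Which error the broker surfaces can depend on its ACL configuration, so treat this as a sketch of the idea rather than a guarantee; the broker address and topic are placeholders:

```java
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.errors.TopicAuthorizationException;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;

public class TopicProbe {
    /** Returns "missing", "unauthorized", or "ok" for the given topic. */
    static String probe(AdminClient admin, String topic) throws Exception {
        try {
            admin.describeTopics(Collections.singleton(topic)).all().get();
            return "ok"; // exists, and this principal may describe it
        } catch (ExecutionException e) {
            if (e.getCause() instanceof UnknownTopicOrPartitionException) {
                return "missing";      // safe to create
            }
            if (e.getCause() instanceof TopicAuthorizationException) {
                return "unauthorized"; // fail fast, as the question requires
            }
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed
        try (AdminClient admin = AdminClient.create(props)) {
            System.out.println(probe(admin, "my-topic")); // assumed topic
        }
    }
}
```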