kafka-consumer-api

What does a dash represent in CURRENT-OFFSET?

Submitted by 帅比萌擦擦* on 2021-02-04 21:15:47
Question: Referring to the screenshot below of the consumer-group description, I am trying to understand what "-" means here for CURRENT-OFFSET. Does it mean that messages are not consumed from partitions 1 & 3 even though those partitions are allocated to a consumer? The LOG-END-OFFSET for partitions 1 & 3 is 281 and 277 respectively. Answer 1: CURRENT-OFFSET is the current max offset of the consumed messages of the partition for this consumer instance, whereas LOG-END-OFFSET is the offset of the latest message in the partition.
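To make the dash concrete, here is a minimal plain-Java sketch (not the actual kafka-consumer-groups.sh source; class and method names are mine) of how the CURRENT-OFFSET and LAG columns can be rendered: a dash means the group has no committed offset for that partition yet, so the lag is undefined as well.

```java
public class ConsumerGroupLag {
    // committed == null models the "-" case: the partition is assigned,
    // but the group has never committed an offset for it
    static String formatCurrentOffset(Long committed) {
        return committed == null ? "-" : committed.toString();
    }

    static String formatLag(Long committed, long logEndOffset) {
        return committed == null ? "-" : Long.toString(logEndOffset - committed);
    }

    public static void main(String[] args) {
        // Partition 1 from the question: LOG-END-OFFSET 281, no committed offset
        System.out.println(formatCurrentOffset(null) + " / " + formatLag(null, 281));
        // A partition with a committed offset behaves normally
        System.out.println(formatCurrentOffset(250L) + " / " + formatLag(250L, 281));
    }
}
```

Note that messages may still have been *fetched* by the consumer; the dash only says nothing was ever *committed* for that partition.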

Consuming multiple Kafka topics

Submitted by 女生的网名这么多〃 on 2021-02-04 19:31:26
Question: consumer.subscribe(Pattern.compile(".*"), new ConsumerRebalanceListener() { @Override public void onPartitionsRevoked(Collection<TopicPartition> clctn) { } @Override public void onPartitionsAssigned(Collection<TopicPartition> clctn) { } }); How do I consume all topics with a regex in Apache Kafka? I tried the code above, but it didn't work. Answer 1: For a regex subscription, use the following signature: KafkaConsumer.subscribe(Pattern pattern, ConsumerRebalanceListener listener) E.g. the following code snippet enables the
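A pattern subscription matches the regex against topic names as the consumer refreshes its metadata. The stdlib sketch below (topic names are hypothetical) shows which names a given pattern selects, which is the same matching logic applied to your subscription; a broad pattern like ".*" picks up every matching topic, so a narrower pattern is often what you actually want.

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class TopicPatternDemo {
    // Filters topic names the way a pattern subscription selects topics
    static List<String> matching(Pattern p, List<String> topics) {
        return topics.stream()
                     .filter(t -> p.matcher(t).matches())
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> topics = List.of("orders.created", "orders.shipped", "payments");
        // ".*" selects every topic
        System.out.println(matching(Pattern.compile(".*"), topics));
        // A scoped prefix pattern is usually safer
        System.out.println(matching(Pattern.compile("orders\\..*"), topics));
    }
}
```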

How to reset the retry count in Spring Kafka consumer when the exception thrown in the first retry is different from the second retry?

Submitted by 懵懂的女人 on 2021-01-29 22:34:48
Question: I am trying to implement a Kafka retry consumer in Spring Boot, using SeekToCurrentErrorHandler for the retries. I have set the backoff policy to have 5 retry attempts. My question is: let's say on the first retry attempt the exception was 'database not available', and on the second attempt the DB was available but there was another failure at another step, such as a timeout. In this case, will the retry count go back to zero and start afresh, or will it continue to try only for the remaining
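As I understand it, Spring Kafka tracks failures per *record* (topic-partition-offset), not per exception type, so a different exception on the same record continues the same count. A plain-Java sketch of that counting behavior (class and method names are mine, not Spring's):

```java
public class FailureTracker {
    private String lastRecordKey;   // topic-partition@offset of the last failure
    private int count;

    /** Returns the attempt number for this record; resets only when a
     *  different record fails, regardless of which exception was thrown. */
    int recordFailure(String topic, int partition, long offset) {
        String key = topic + "-" + partition + "@" + offset;
        if (key.equals(lastRecordKey)) {
            count++;
        } else {
            lastRecordKey = key;
            count = 1;
        }
        return count;
    }

    public static void main(String[] args) {
        FailureTracker t = new FailureTracker();
        // Same record, different exceptions: the count keeps growing
        System.out.println(t.recordFailure("orders", 0, 42)); // e.g. DB down
        System.out.println(t.recordFailure("orders", 0, 42)); // e.g. timeout
        // A different record starts a fresh count
        System.out.println(t.recordFailure("orders", 0, 43));
    }
}
```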

Spring Kafka Consumer, rewind consumer offset to go back 'n' records

Submitted by 守給你的承諾、 on 2021-01-29 20:37:25
Question: I'm using a "programmatic" way of consuming messages from a Kafka topic using org.springframework.kafka.listener.ConcurrentMessageListenerContainer. I'm wondering if there's a "Spring" way of rewinding the offsets for specific partitions of a topic to go back 'n' messages? I'd like to know the cleanest way of doing this (programmatically, not using the CLI). Answer 1: If you want to reset the offsets during application startup, use a ConsumerAwareRebalanceListener and perform the seeks on the consumer in onPartitionsAssigned().
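Whichever callback performs the seek, the arithmetic is the same: the target is the current position minus n, clamped so you never seek before the beginning of the partition. A small hypothetical helper (names are mine):

```java
public class RewindMath {
    /** Target offset for going back n records, never before the log start
     *  (the beginning offset can be > 0 after retention deletes segments). */
    static long rewindTarget(long currentPosition, long n, long beginningOffset) {
        return Math.max(beginningOffset, currentPosition - n);
    }

    public static void main(String[] args) {
        System.out.println(rewindTarget(100, 10, 0)); // plenty of history behind us
        System.out.println(rewindTarget(5, 10, 0));   // clamped to the log start
    }
}
```

In real code the result would feed a call along the lines of consumer.seek(partition, rewindTarget(consumer.position(partition), n, beginning)), with the beginning offset obtained from the consumer; the exact wiring depends on which listener or container API you use.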

What is the delay between each poll?

Submitted by 不羁的心 on 2021-01-29 16:06:25
Question: In the Kafka documentation I'm trying to understand this property, max.poll.interval.ms: "The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member." Does this mean each poll will happen
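For reference, the property sits alongside the related poll settings in the consumer configuration; the values below are the documented defaults, shown only as an illustrative config fragment, not a recommendation:

```properties
# Maximum time allowed between poll() calls before the consumer is
# considered failed and its partitions are rebalanced (default 5 minutes)
max.poll.interval.ms=300000
# Upper bound on records returned by a single poll(); lowering it helps
# keep per-poll processing time under max.poll.interval.ms
max.poll.records=500
```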

Kafka Connect SMT to add Kafka header fields

Submitted by 一曲冷凌霜 on 2021-01-29 13:59:01
Question: I need to find or write an SMT that will add header fields to a record. The record is missing some type fields and I want to add them. How exactly do you add a header within an SMT? All I have seen are record transforms like the one below, but what if it's the header I want to change or add a field to? private R applySchemaless(R record) { final Map<String, Object> value = requireMap(operatingValue(record), PURPOSE); // record.headers.add(Header) but how do I define the header // or record

Kafka consumer returns no records

Submitted by 三世轮回 on 2021-01-29 10:23:31
Question: I am trying to make a small PoC with Kafka. However, when writing the consumer in Java, this consumer gets no messages, even though when I fire up kafka-console-consumer.sh with the same URL/topic I do get messages. Does anyone know what I might be doing wrong? This code is called by a GET API. public List<KafkaTextMessage> receiveMessages() { log.info("Retrieving messages from kafka"); val props = new Properties(); // See https://kafka.apache.org/documentation/#consumerconfigs props.put(
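One frequent cause of this symptom (an assumption here, since the full code is cut off): a consumer group that has never committed offsets starts at the *latest* offset by default and so sees only messages produced after it joins, whereas kafka-console-consumer.sh run with --from-beginning reads the whole log. An illustrative config fragment (the group id is hypothetical):

```properties
# Where a group with no committed offsets starts reading;
# the default "latest" skips everything already in the topic
auto.offset.reset=earliest
group.id=poc-consumer-group
```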

Kafka Consumer Failed to load SSL keystore (Logstash ArcSight module) for any keystore type and path

Submitted by 柔情痞子 on 2021-01-29 02:17:25
Question: I need to supply a certificate for client authentication for a Kafka consumer; however, it always fails with the following exception (Failed to load SSL keystore): ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/security/cacerts ssl.keystore.password = [hidden] ssl.keystore.type = JKS ssl.protocol
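Worth noting in the log above: ssl.keystore.location points at the JRE's cacerts file, which is a truststore of CA certificates, not a client keystore holding a certificate and private key. The two are typically configured separately, along these lines (paths and passwords are placeholders, not recommendations):

```properties
security.protocol=SSL
# Client certificate + private key, used for client authentication
ssl.keystore.location=/etc/kafka/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
# CA certificates used to verify the broker
ssl.truststore.location=/etc/kafka/client.truststore.jks
ssl.truststore.password=changeit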

"No current assignment for partition" occurs even after poll in Kafka

Submitted by 耗尽温柔 on 2021-01-22 12:08:19
Question: I have a Java 8 application working with Apache Kafka 2.11-0.10.1.0. I need to use the seek feature to poll old messages from partitions. However, I face a No current assignment for partition exception every time I try to seekByOffset. Here's my class, which is responsible for seeking topics to the specified timestamp: import org.apache.kafka.clients.consumer.ConsumerRecords; import org.apache.kafka.clients.consumer.KafkaConsumer; import org.apache.kafka.clients

Kafka Consumer offset commit check to avoid committing smaller offsets

Submitted by 核能气质少年 on 2021-01-04 05:34:22
Question: Assume we have a consumer that sends a request to commit offset 10, but there is a communication problem: the broker doesn't get the request and, of course, doesn't respond. After that, another consumer processes another batch and successfully commits offset 20. Q: I want to know whether there is a way or property we can use to check if the previous offset in the log was committed before committing, in our case, offset 20? Answer 1: The scenario you are describing can only
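Two points worth separating here. First, a committed offset is simply an overwrite of the stored value, so committing 20 after a lost commit of 10 is harmless by itself; the real hazard is a *smaller* offset being committed after a larger one (e.g. a delayed retry landing late). Second, that hazard can be guarded against on the application side. A plain-Java sketch of such a monotonic commit guard (not a Kafka API; names are mine):

```java
public class MonotonicCommitGuard {
    private long highestCommitted = -1L;

    /** Accepts the offset only if it is strictly larger than anything
     *  committed so far, so a stale retry can never move the group backwards. */
    boolean tryCommit(long offset) {
        if (offset <= highestCommitted) {
            return false; // stale or duplicate commit; skip it
        }
        highestCommitted = offset;
        return true;
    }

    public static void main(String[] args) {
        MonotonicCommitGuard guard = new MonotonicCommitGuard();
        System.out.println(guard.tryCommit(20)); // true: first commit
        System.out.println(guard.tryCommit(10)); // false: smaller, rejected
    }
}
```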