apache-kafka

CDI context in Kafka de-/serializer in Quarkus app

Submitted by 别等时光非礼了梦想 on 2021-02-10 09:24:51
Question: I have a Quarkus project with SmallRye Reactive Messaging based on Kafka. Since I want to work with a "complex POJO", I need a custom de-/serializer. I'd like to make those two classes CDI beans so that I can inject and use my custom logger, which is itself a CDI bean. Is there a way to achieve this? Right now my injected logger object is simply null:

    import org.apache.kafka.common.serialization.Serializer;
    import javax.enterprise.context.ApplicationScoped;
    import javax.inject.Inject;

    @ApplicationScoped
    …
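The excerpt cuts off before any answer, but the usual explanation is that Kafka instantiates serializers itself via reflection through a no-arg constructor, so @Inject never runs on them. A minimal workaround sketch, resolving the bean programmatically from the CDI container instead; MyLogger, ComplexPojo, and toBytes() are hypothetical stand-ins for the poster's own classes:

    import java.util.Map;
    import javax.enterprise.inject.spi.CDI;
    import org.apache.kafka.common.serialization.Serializer;

    public class ComplexPojoSerializer implements Serializer<ComplexPojo> {

        private MyLogger log;

        @Override
        public void configure(Map<String, ?> configs, boolean isKey) {
            // Kafka created this object via reflection, so field injection
            // never happened; fetch the bean from the running CDI container.
            this.log = CDI.current().select(MyLogger.class).get();
        }

        @Override
        public byte[] serialize(String topic, ComplexPojo data) {
            log.info("Serializing record for topic " + topic);
            return data.toBytes(); // plug in the existing serialization logic here
        }
    }

In Quarkus specifically, Arc.container().instance(MyLogger.class).get() performs the same lookup through the Arc container.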

Can the JDBC Kafka Connector pull data from multiple databases?

Submitted by a 夏天 on 2021-02-10 07:17:08
Question: I would love to set up a cluster of JDBC Kafka connectors and configure them to pull from multiple databases running on the same host. I've been looking through the Kafka Connect documentation, and it appears that after you configure the JDBC connector it can only pull data from a single database. Can anyone confirm this?

Answer 1: It depends on the mode in which you start your workers (standalone or distributed). In standalone mode, you can start multiple JDBC connectors by using: bin/connect…
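Filling in the shape of that truncated command, a sketch of what standalone mode allows (file names, database, and connection details are hypothetical): the standalone worker accepts several connector property files at once, one per database.

    bin/connect-standalone.sh config/worker.properties jdbc-db1.properties jdbc-db2.properties

    # jdbc-db1.properties -- repeat per database, adjusting name, url, and prefix
    name=jdbc-source-db1
    connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
    connection.url=jdbc:postgresql://localhost:5432/db1
    mode=incrementing
    incrementing.column.name=id
    topic.prefix=db1-

In distributed mode, the equivalent is POSTing one connector configuration per database to the worker's REST API.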

Manually setting Kafka consumer offset

Submitted by 老子叫甜甜 on 2021-02-10 07:14:26
Question: In our project there are active Kafka servers (PR) and passive Kafka servers (DR); both Kafka clusters are configured with the same group name, topic name, and partitions. When switching from PR to DR, the __consumer_offsets topic is set manually on DR. My question here is: would the Kafka consumer be able to seamlessly consume the messages from where it last read?

Answer 1: When replicating messages across 2 clusters, it's not possible to ensure offsets stay in sync. For example, if a…
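Because offsets diverge between clusters, one common failover technique (a sketch consistent with, but not taken from, the truncated answer; broker address, topic, and the cut-over timestamp are hypothetical) is to reposition the consumer on DR by timestamp instead of trusting copied offsets:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class DrFailoverSeek {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "dr-broker:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            List<TopicPartition> partitions = List.of(
                    new TopicPartition("my-topic", 0), new TopicPartition("my-topic", 1));
            consumer.assign(partitions);

            // Hypothetical: wall-clock time of the last record processed on PR
            long cutOverMillis = 1613984400000L;
            Map<TopicPartition, Long> query = new HashMap<>();
            partitions.forEach(tp -> query.put(tp, cutOverMillis));

            // offsetsForTimes maps each partition to the earliest offset whose
            // timestamp is >= the requested time on *this* cluster
            consumer.offsetsForTimes(query).forEach((tp, oat) -> {
                if (oat != null) {
                    consumer.seek(tp, oat.offset()); // resume near the cut-over point
                }
            });
        }
    }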

Delete Messages from a Topic in Apache Kafka

Submitted by ≡放荡痞女 on 2021-02-10 03:26:35
Question: I am new to working with Apache Kafka and am trying to create a simple app so I can understand the API better. I know this question has been asked a lot here, but how can I clear out the messages/records that are stored on a topic? Most of the answers I have seen say to change the message retention time or to delete and recreate the topic. Neither of these is an option for me, as I do not have access to the server.properties file. I am not running Kafka locally; it is hosted on a…
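The excerpt cuts off before any answer, but one approach needing no broker-file access is worth sketching: Kafka 0.11+ exposes a delete-records API through the AdminClient, which truncates partitions up to a given offset. Broker address and topic below are hypothetical:

    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.RecordsToDelete;
    import org.apache.kafka.common.TopicPartition;

    public class PurgeTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "my-host:9092");

            try (AdminClient admin = AdminClient.create(props)) {
                // Offset -1 conventionally means "up to the high watermark",
                // i.e. delete everything currently in the partition
                Map<TopicPartition, RecordsToDelete> request = Map.of(
                        new TopicPartition("test-topic", 0),
                        RecordsToDelete.beforeOffset(-1L));
                admin.deleteRecords(request).all().get();
            }
        }
    }

The same operation is available from the command line via bin/kafka-delete-records.sh, which also talks to the brokers rather than editing server.properties; either way, the hosting provider must grant the corresponding ACL.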

Frequent “offset out of range” messages, partitions deserted by consumer

Submitted by 丶灬走出姿态 on 2021-02-09 11:00:37
Question: We are running a 3-node Kafka 0.10.0.1 cluster. We have a consumer application with a single consumer group connecting to multiple topics. We are seeing strange behaviour in the consumer logs, with lines like these:

    Fetch offset 1109143 is out of range for partition email-4, resetting offset
    Fetch offset 952168 is out of range for partition email-7, resetting offset
    Fetch offset 945796 is out of range for partition email-5, resetting offset
    Fetch offset 950900 is out of range for partition email-0, …
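For context on what those lines mean (a sketch, not part of the truncated post): the broker no longer has the offset the consumer asked for, typically because retention deleted it, and auto.offset.reset decides where the consumer lands afterwards. A minimal consumer setup showing the relevant knob; broker address, group, and topic names are hypothetical:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class EmailConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "email-consumers");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            // "earliest" replays from the oldest record still retained;
            // the default "latest" jumps to the end and skips the missing range
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(List.of("email"));
        }
    }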

No pending reply: ConsumerRecord

Submitted by 有些话、适合烂在心里 on 2021-02-09 10:57:54
Question: I am trying to use ReplyingKafkaTemplate, and intermittently I keep seeing the message below:

    No pending reply: ConsumerRecord(topic = request-reply-topic, partition = 8, offset = 1,
    CreateTime = 1544653843269, serialized key size = -1, serialized value size = 1609,
    headers = RecordHeaders(headers = [RecordHeader(key = kafka_correlationId,
    value = [-14, 65, 21, -118, 70, -94, 72, 87, -113, -91, 92, 72, -124, -110, -64, -94])],
    isReadOnly = false), key = null, with correlationId: […
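One frequent cause (a sketch, not taken from the truncated post): several application instances consume the same reply topic with the same consumer group, so a reply can land on an instance that never sent the request and therefore holds no pending correlation ID. Giving each instance its own reply topic is one way out; the bean below uses hypothetical names:

    import org.springframework.context.annotation.Bean;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.core.ProducerFactory;
    import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
    import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;

    @Bean
    public ReplyingKafkaTemplate<String, String, String> replyingTemplate(
            ProducerFactory<String, String> pf,
            ConcurrentKafkaListenerContainerFactory<String, String> containerFactory) {
        // One reply topic (and group) per instance so every reply reaches its sender
        String replyTopic = "replies-" + System.getenv("INSTANCE_ID");
        ConcurrentMessageListenerContainer<String, String> replyContainer =
                containerFactory.createContainer(replyTopic);
        replyContainer.getContainerProperties().setGroupId(replyTopic);
        return new ReplyingKafkaTemplate<>(pf, replyContainer);
    }

Replies that arrive after the template's reply timeout are logged the same way, because the pending future has already been removed, so raising that timeout is also worth checking.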
