kafka-producer-api

Kafka configuration min.insync.replicas not working

Submitted by ∥☆過路亽.° on 2020-01-06 08:11:40
Question: It's early days for me in learning Kafka, and I am checking out every Kafka property/concept on my local machine. So I came across the property min.insync.replicas, and here is my understanding; please correct me if I've misunderstood anything. Once a message is sent to a topic, the message must be written to at least min.insync.replicas number of followers. min.insync.replicas also includes the leader. If the number of available live brokers (indirectly, in-sync replicas) is less than the specified…
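
A detail that often explains why this setting appears "not to work": min.insync.replicas is only enforced when the producer asks for full acknowledgement. Below is a minimal, hedged sketch using the Java producer API; the broker address and topic name are placeholders, not values from the question.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MinInsyncReplicasDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address for this sketch.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // min.insync.replicas is only checked for acks=all (-1).
        // With acks=0 or acks=1 the write is accepted even if fewer replicas are in sync.
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // If the topic is configured with min.insync.replicas=2 and only one replica
            // is alive, this send eventually fails (e.g. NotEnoughReplicasException)
            // instead of being acknowledged.
            producer.send(new ProducerRecord<>("test-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            System.err.println("Send failed: " + exception);
                        }
                    });
        }
    }
}
```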

Connect CometD client with Kafka producer

Submitted by 对着背影说爱祢 on 2020-01-06 03:18:14
Question: Is it possible to connect a CometD client with a Kafka producer? Any suggestions? Currently I have a CometD client in Python which extracts data in real time from a Salesforce object. Now I want to push that data into a Kafka producer. Is it possible to do that? And how? Answer 1: Solved. By using https://github.com/dkmadigan/python-bayeux-client to extract the events from Salesforce, I was able to push them into the Kafka broker. Source: https://stackoverflow.com/questions/50615641/connect-cometd-client
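
Independent of the Python Bayeux library linked in the answer, the forwarding step itself is just an ordinary produce call made from whatever callback the CometD client fires per event. A hedged sketch of that pattern in Java follows; the class name, topic, and the assumption that events arrive as JSON strings are illustrative, not taken from the question.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SalesforceEventForwarder {
    private final KafkaProducer<String, String> producer;
    private final String topic;

    public SalesforceEventForwarder(String bootstrapServers, String topic) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        this.producer = new KafkaProducer<>(props);
        this.topic = topic;
    }

    // Call this from the CometD/Bayeux subscription callback for each received event.
    public void onEvent(String eventJson) {
        producer.send(new ProducerRecord<>(topic, eventJson));
    }

    public void close() {
        producer.flush();
        producer.close();
    }
}
```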

I can't produce data to Kafka when using the script, but I can list topics with the script

Submitted by ◇◆丶佛笑我妖孽 on 2020-01-05 05:57:32
Question: Everybody, there is a virtual server on the local area network whose IP is 192.168.18.230, and my machine's IP is 192.168.0.175. Today I tried to use my machine (192.168.0.175) to send some messages to my virtual server (192.168.18.230) with the Kafka console producer: $ bin/kafka-console-producer.sh --broker-list 192.168.18.230:9092 --topic test but something went wrong. The description of the problem is: [2017-04-10 17:25:40,396] ERROR Error when sending message to topic test with key…
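
The excerpt cuts off before the full error, but a frequent cause of "metadata works, producing fails" across machines is that the broker advertises a listener address the remote client cannot reach (the broker's advertised.listeners / advertised.host.name setting). As a hedged diagnostic sketch, the snippet below prints what the broker actually advertises to clients; the use of AdminClient is an assumption about the client version available, not something from the question.

```java
import java.util.Collection;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class CheckAdvertisedListeners {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // The broker address from the question; adjust as needed.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.18.230:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            Collection<Node> nodes = admin.describeCluster().nodes().get();
            // Each node's host:port below is the address the producer will connect to
            // when it actually sends data. If it prints a hostname or IP that is not
            // reachable from 192.168.0.175, metadata requests can still succeed while
            // producing fails.
            for (Node node : nodes) {
                System.out.println("Broker " + node.id() + " advertises "
                        + node.host() + ":" + node.port());
            }
        }
    }
}
```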

Excessive console messages from Kafka Producer

Submitted by 允我心安 on 2020-01-04 05:46:15
Question: How do you control the console logging level of a Kafka producer or consumer? I am using the Kafka 0.9 API in Scala. Every time send is called on the KafkaProducer, the console gives output like the text below. Could this indicate that I do not have the KafkaProducer set up correctly, rather than it just being an issue of excessive logging? 17:52:21.236 [pool-10-thread-7] INFO o.a.k.c.producer.ProducerConfig - ProducerConfig values: compression.type = none metric.reporters = [] metadata.max.age.ms = 300000 . . . 17…
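
Those ProducerConfig dumps are ordinary INFO log lines from the Kafka client library; they are normally emitted once per producer construction, so seeing them on every send may hint that a new KafkaProducer is being created per call. Either way they can be silenced by raising the org.apache.kafka loggers to WARN. A hedged sketch follows, assuming Logback is the SLF4J backend (an equivalent log4j.properties entry works if log4j is in use instead); it is JVM code and can be called from Scala as well.

```java
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.slf4j.LoggerFactory;

public class QuietKafkaClients {
    public static void quietKafkaLogging() {
        // Raise all org.apache.kafka.* loggers to WARN so the ProducerConfig/ConsumerConfig
        // dumps emitted when a client is created are suppressed, while warnings still show.
        Logger kafkaLogger = (Logger) LoggerFactory.getLogger("org.apache.kafka");
        kafkaLogger.setLevel(Level.WARN);
    }
}
```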

Handling broker down in Kafka

Submitted by 旧时模样 on 2020-01-03 02:48:06
Question: I'm using the Kafka producer in async mode, but when all brokers are down it acts like sync and waits until metadata.fetch.timeout.ms expires, which is 60 seconds in my case. My first question: is this normal behaviour, or am I doing something wrong? Since transactions in my logic should finish in at most 100 ms, this timeout value is a really big delay for me. Perhaps setting metadata.fetch.timeout.ms to 10 ms may solve my problem, but I am not sure how this affects my system. Does this cause a…
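
The blocking itself is expected: even with an asynchronous send, the call blocks while the producer fetches metadata for the topic, and when no broker is reachable it blocks for the full metadata timeout. A hedged sketch of tightening that bound follows; the 100 ms figure mirrors the question's latency budget, and max.block.ms is the newer name for this bound in later clients (metadata.fetch.timeout.ms in older ones), so the exact config key depends on the client version.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.common.serialization.StringSerializer;

public class BoundedBlockingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address and topic for this sketch.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Cap how long send() may block waiting for metadata (or buffer space).
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "100");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            try {
                producer.send(new ProducerRecord<>("test-topic", "value"));
            } catch (TimeoutException e) {
                // With all brokers down, send() gives up after ~100 ms
                // instead of blocking for the default 60 s.
                System.err.println("Metadata not available within max.block.ms: " + e.getMessage());
            }
        }
    }
}
```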

Kafka Stream reprocessing old messages on rebalancing

Submitted by 拈花ヽ惹草 on 2020-01-02 23:14:17
Question: I have a Kafka Streams application which reads data from a few topics, joins the data and writes it to another topic. This is the configuration of my Kafka cluster: 5 Kafka brokers; Kafka topics with 15 partitions and replication factor 3. My Kafka Streams applications are running on the same machines as my Kafka brokers. A few million records are consumed/produced per hour. Whenever I take a broker down, the application goes into a rebalancing state, and after rebalancing many times it starts…
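
For reference, the described setup (read from a few topics, join, write to another topic) corresponds to a Kafka Streams topology roughly like the hedged sketch below; the topic names, serdes, join window, and application id are assumptions for illustration only, not the asker's code.

```java
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

public class JoinTopologySketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "join-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Two input streams joined over a time window, result written to an output topic.
        KStream<String, String> left = builder.stream("topic-a");
        KStream<String, String> right = builder.stream("topic-b");
        left.join(right,
                  (l, r) -> l + "|" + r,
                  JoinWindows.of(Duration.ofMinutes(5)))
            .to("joined-output");

        new KafkaStreams(builder.build(), props).start();
    }
}
```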

When does Kafka Leader Election happen?

Submitted by 喜你入骨 on 2020-01-02 02:42:10
Question: When and how often does the Kafka high-level producer elect a leader? Does it do so before sending each message, or only once at the time of creating the connection? Answer 1: Every broker has information about the list of topics (and partitions) and their leaders, which is kept up to date via ZooKeeper whenever a new leader is elected or the number of partitions changes. Thus, when the producer makes a call to one of the brokers, the broker responds with this information list. Once the producer…
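
This can be observed directly from the client: the producer's cached metadata already contains the current leader for every partition, and the producer does not elect leaders itself; it learns the result from the brokers and refreshes the metadata periodically or when a send fails because leadership moved. A hedged sketch that prints the producer's current view follows; the broker address and topic name are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.serialization.StringSerializer;

public class ShowPartitionLeaders {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // partitionsFor() returns the producer's view of the topic metadata,
            // including which broker currently leads each partition.
            for (PartitionInfo p : producer.partitionsFor("test-topic")) {
                System.out.println("Partition " + p.partition() + " leader: " + p.leader());
            }
        }
    }
}
```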

Start Confluent Schema Registry in Windows

Submitted by 让人想犯罪 __ on 2020-01-01 02:44:18
Question: I have a Windows environment and my own set of Kafka and ZooKeeper running. To use custom objects, I started to use Avro, but I needed to get the registry started. I downloaded the Confluent platform and ran this: $ ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties /c/Confluent/confluent-3.0.0-2.11/confluent-3.0.0/bin/schema-registry-run-class: line 103: C:\Program: No such file or directory Then I see this on the installation page: "Confluent does not currently support…

Topics, partitions and keys

Submitted by 点点圈 on 2019-12-31 09:14:08
Question: I am looking for some clarification on the subject. In the Kafka documentation I found the following: Kafka only provides a total order over messages within a partition, not between different partitions in a topic. Per-partition ordering combined with the ability to partition data by key is sufficient for most applications. However, if you require a total order over messages this can be achieved with a topic that has only one partition, though this will mean only one consumer process per…
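
The practical consequence of per-partition ordering is that records which must stay in order should share a key, because the default partitioner sends equal keys to the same partition. A hedged sketch follows; the topic name, key, and broker address are placeholders for illustration.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KeyedOrderingExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // All three records carry the same key ("order-42"), so the default partitioner
            // hashes them to the same partition, where consumers see them in produced order.
            // Records with different keys may land on different partitions, across which
            // no relative ordering is guaranteed.
            producer.send(new ProducerRecord<>("orders", "order-42", "created"));
            producer.send(new ProducerRecord<>("orders", "order-42", "paid"));
            producer.send(new ProducerRecord<>("orders", "order-42", "shipped"));
        }
    }
}
```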