kafka-producer-api

Kafka Connect tutorial stopped working

末鹿安然 submitted on 2019-12-12 03:37:24

Question: I was following step #7 (Use Kafka Connect to import/export data) at this link: http://kafka.apache.org/documentation.html#quickstart It was working well until I deleted the 'test.txt' file, mainly because that is how log4j files work: after a certain time, the file gets rotated, i.e. it is renamed and a new file with the same name starts getting written to. But after I deleted 'test.txt', the connector stopped working. I restarted the connector, broker, zookeeper etc., but the
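A likely cause (an inference, not confirmed by the truncated excerpt): the quickstart's standalone FileStreamSource connector remembers how far it has read in the offset file named by `offset.storage.file.filename`, so once `test.txt` is deleted and recreated, the stored offset can point past the end of the new file and nothing more is emitted. A minimal recovery sketch, assuming the quickstart's default paths:

```properties
# config/connect-standalone.properties (quickstart default, assumed)
# The standalone worker persists source offsets here, keyed by filename:
offset.storage.file.filename=/tmp/connect.offsets
```

Stopping the worker, deleting `/tmp/connect.offsets`, and restarting it should make the connector re-read `test.txt` from the beginning. For real log rotation, a connector that tracks rotated files is usually needed; the quickstart's FileStreamSource is only a demo.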

Kafka - What is the reason for getting ProducerFencedException during producer.send?

我只是一个虾纸丫 submitted on 2019-12-11 22:14:56

Question: I am trying to load around 50K messages into a Kafka topic. During the first few runs I get the exception below, but not every time. org.apache.kafka.common.KafkaException: Cannot execute transactional method because we are in an error state at org.apache.kafka.clients.producer.internals.TransactionManager.maybeFailWithError(TransactionManager.java:784) ~[kafka-clients-2.0.0.jar:?] at org.apache.kafka.clients.producer.internals.TransactionManager.beginAbort(TransactionManager.java:229) ~[kafka
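One common cause (hedged, since the excerpt is truncated): a ProducerFencedException is thrown when a second producer registers with the same `transactional.id`, fencing the first; after any transactional call fails, the TransactionManager stays in an error state and every subsequent call fails with "Cannot execute transactional method because we are in an error state". A sketch of producer settings that avoid accidental fencing, with illustrative values:

```properties
# Each concurrently running producer instance must use a distinct transactional.id
transactional.id=loader-instance-1
# Required (and implied) for transactional producers
enable.idempotence=true
# The whole 50K-message transaction must commit within this window,
# or the coordinator aborts it and later calls fail
transaction.timeout.ms=60000
```

On ProducerFencedException the only safe action is to close the producer and create a new one; for other retriable KafkaExceptions, abortTransaction() and retry the batch.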

Distributing socket data among Kafka cluster nodes

≯℡__Kan透↙ submitted on 2019-12-11 19:17:31

Question: I want to get data from a socket and put it into a Kafka topic so that my Flink program can read the data from the topic and process it. I can do that on one node, but I want a Kafka cluster with at least three different nodes (different IP addresses) and to poll data from the socket and distribute it among the nodes. I do not know how to do this or how to change this code. My simple program is the following: public class WordCount { public static void main(String[] args) throws Exception { kafka_test objKafka=new kafka
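Kafka distributes load by partition, not by socket: creating the topic with at least three partitions (so each broker can lead one) spreads the records, and the producer's default partitioner already rotates keyless records across partitions, so the producer code usually needs no change. As an illustration of that distribution policy only (this helper class is hypothetical, not part of the Kafka API):

```java
// Hypothetical illustration of round-robin partition assignment, the same
// policy Kafka's default partitioner applies to records without a key.
public class RoundRobinChooser {
    private final int numPartitions;
    private int counter = 0;

    public RoundRobinChooser(int numPartitions) {
        this.numPartitions = numPartitions;
    }

    // floorMod keeps the result non-negative even if the counter overflows
    public int next() {
        return Math.floorMod(counter++, numPartitions);
    }

    public static void main(String[] args) {
        RoundRobinChooser chooser = new RoundRobinChooser(3);
        for (int i = 0; i < 6; i++) {
            System.out.println("record " + i + " -> partition " + chooser.next());
        }
    }
}
```

So the usual fix is on the topic side, e.g. `kafka-topics --create --partitions 3 --replication-factor 3 ...`, plus sending records without keys.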

Getting Kafka usage details

纵饮孤独 submitted on 2019-12-11 17:48:39

Question: I am trying to find ways to get current usage statistics for my Kafka cluster. I am looking to collect the following information: number of topics in the cluster; number of partitions per broker; number of active consumers and producers; number of client connections per broker; number of messages on each partition, disk size, etc.; lagging replicas, consumer lag, etc.; active consumer groups; and any other statistics that can and should be collected. Currently I am looking at collecting the
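Most of these numbers are exposed either by the stock CLI tools or by the broker's JMX metrics. A command sketch using the shipped tools (broker address, topic, and group names are placeholders; option names vary slightly across Kafka versions):

```shell
# Topic and partition inventory (--under-replicated-partitions narrows to lagging replicas)
bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe

# Active consumer groups, and per-partition offsets/lag for one group
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group

# Latest offset per partition (message counts are the delta between --time -2 and --time -1)
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic my-topic --time -1
```

Per-broker connection counts and disk usage are not in the CLI; they come from JMX (the `kafka.server` and `kafka.log` metric groups) or from monitoring tools built on top of them.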

Spring Cloud Stream for Kafka with consumer/producer API exactly once semantics with transaction-id-prefix is not working as expected

烈酒焚心 submitted on 2019-12-11 17:22:32

Question: I have a scenario where I am seeing different behavior. There are a total of 3 different services: the first service listens on a Solace queue and produces to Kafka topic-1 (where transactions are enabled); the second service listens on the above Kafka topic-1 and writes to another Kafka topic-2 (no manual commits, transactions enabled to produce to the other topic, auto commit offset set to false, and isolation.level set to read_committed); the third service listens on Kafka topic-2 and
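For reference, a minimal binder configuration sketch for this setup (property names as documented for the Spring Cloud Stream Kafka binder; the prefix value is illustrative). The pairing the question describes is the important part: a transactional producer on one side and `read_committed` consumption on the other.

```yaml
spring:
  cloud:
    stream:
      kafka:
        binder:
          transaction:
            # Enables transactional producers in the binder; each producer
            # derives its transactional.id from this prefix.
            transaction-id-prefix: tx-
          configuration:
            # Consumers skip records from aborted or still-open transactions.
            isolation.level: read_committed
```

If downstream services see records from aborted transactions, the first thing to verify is that every consuming service, not just one, sets `isolation.level` to `read_committed`.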

How to create a Kafka topic from Java for KAFKA-2.1.1-1.2.1.1?

Deadly submitted on 2019-12-11 16:56:59

Question: I am working on a Java interface that would take user input of topic name, replication and partitions to create a Kafka topic in KAFKA-2.1.1-1.2.1.1. This is code that I have used from other sources, but it seems to be for a previous version of Kafka: import kafka.admin.AdminOperationException; import org.I0Itec.zkclient.ZkClient; import org.I0Itec.zkclient.ZkConnection; import java.util.Properties; import java.util.concurrent.TimeUnit; import kafka.admin.AdminUtils; import kafka.utils
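The `kafka.admin.AdminUtils`/ZkClient route in that snippet is the legacy API that talks to ZooKeeper directly. Since Kafka 0.11 the supported way is `AdminClient` from the kafka-clients library, which is available in 2.1.1 and needs only the broker address. A sketch (broker address, topic name, and counts are placeholders to be replaced by the user input):

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class TopicCreator {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // topic name, partition count, replication factor from user input
            NewTopic topic = new NewTopic("my-topic", 3, (short) 2);
            // all().get() blocks until the broker confirms creation (or fails)
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```

This needs only kafka-clients on the classpath; the ZkClient and I0Itec dependencies can be dropped entirely.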

How can I use the map function instead of collect in my code below?

独自空忆成欢 submitted on 2019-12-11 16:24:19

Question: The Kafka producer tries to send bulk messages (10k) at a time, but after transferring about 3k it starts throwing errors. I have used the collect function in the code below, which might be the problem, so I want to replace it with map. //Memberjson is JavaRDD<String> for (String outputMbrJson : memberjson.collect()) { try { Producer kafkaProducer = new Producer(configFile); kafkaProducer.runProducer(Arrays.asList(outputMbrJson).iterator()); } catch (Throwable e) { e.printStackTrace(); } }
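`collect()` pulls the entire RDD to the driver, and the loop also creates a brand-new producer per record, which is a plausible reason things break around 3k messages. `map` alone would not help here, since it is a lazy transformation; `foreachPartition` is the usual replacement: one producer per partition, executed on the workers. A sketch reusing the question's own `Producer`/`runProducer` names (their exact semantics are assumed from the snippet):

```java
// Runs on the executors: one Producer per partition instead of one per record,
// and no driver-side collect() of the whole RDD.
memberjson.foreachPartition(partitionRecords -> {
    // configFile must be serializable (or broadcast) to reach the executors
    Producer kafkaProducer = new Producer(configFile);
    try {
        // send this partition's records in a single pass
        kafkaProducer.runProducer(partitionRecords);
    } catch (Throwable e) {
        e.printStackTrace();
    }
});
```

This keeps the producer count bounded by the partition count and avoids holding all 10k messages in driver memory at once.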

Caused by: java.lang.ClassNotFoundException: io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor

流过昼夜 submitted on 2019-12-11 15:42:53

Question: I am trying to publish a message to a Kafka topic through the Confluent Platform REST proxy using the command below, and it responds with the error mentioned above. Request: $ curl -X POST -H "Content-Type: application/vnd.kafka.avro.v2+json" \ -H "Accept: application/vnd.kafka.v2+json" \ --data '{"value_schema": "{\"type\": \"record\", \"name\": \"User\", \"fields\": [{\"name\": \"name\", \"type\": \"string\"}]}", "records": [{"value": {"name": "test name"}}]}' \ "http://${RESTPROXY_HOST}:8082/topics/${TOPIC}"
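The missing class lives in Confluent's `monitoring-interceptors` jar, which is evidently not on this REST proxy's classpath. Two usual fixes (paths and property names follow Confluent Platform conventions; the version placeholder is left as-is): stop configuring the interceptor, or add the jar.

```properties
# kafka-rest.properties: the REST proxy forwards producer.*-prefixed settings
# to the producers it creates. Either comment this line out ...
# producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
# ... or ensure share/java/monitoring-interceptors/monitoring-interceptors-<version>.jar
# is on the REST proxy classpath (e.g. via the CLASSPATH env var) before starting kafka-rest.
```

The interceptor only feeds Confluent Control Center monitoring, so removing it does not affect message delivery.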

Why doesn't kafka-avro-console-producer honour the default value for a field?

白昼怎懂夜的黑 submitted on 2019-12-11 09:03:27

Question: Although a default is defined for a field, kafka-avro-console-producer ignores it completely: $ kafka-avro-console-producer --broker-list localhost:9092 --topic test-avro \ --property schema.registry.url=http://localhost:8081 --property \ value.schema='{"type":"record","name":"myrecord1","fields": \ [{"name":"f1","type":"string"},{"name": "f2", "type": "int", "default": 0}]}' {"f1": "value1"} org.apache.kafka.common.errors.SerializationException: Error deserializing json {"f1": "value1"} to
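This is expected Avro behavior rather than a bug in the tool: a `default` is applied at read time, when a reader's schema contains a field the writer's data lacks. The JSON decoder that kafka-avro-console-producer uses to parse input still requires every field of the writer's schema to be present, so the reliable fix is to spell the field out in the input line:

```json
{"f1": "value1", "f2": 0}
```

In other words, the default in the schema above benefits future consumers reading older data with a newer schema; it does not let the producer omit the field when writing.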

How to reach Kafka on private network from outside?

假如想象 submitted on 2019-12-11 08:28:39

Question: I have a server with Zookeeper and Kafka on a private network at 10.242.44.55. I have forwarded a port on the gateway from [public_ip]:39092 to 10.242.44.55:9092. I took the following settings for Kafka from another question: listeners=INTERNAL://:9092,EXTERNAL://:39092 advertised.listeners=INTERNAL://10.242.44.55:9092,EXTERNAL://[public_ip]:39092 listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT inter.broker.listener.name=INTERNAL Everything works fine on the private network. I can
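One detail worth checking (an inference from the excerpt, not a confirmed diagnosis): the gateway forwards [public_ip]:39092 to 10.242.44.55:9092, i.e. to the INTERNAL listener, while the EXTERNAL listener binds port 39092 on the broker. An outside client that bootstraps through that forward lands on the INTERNAL listener, so the metadata it gets back advertises 10.242.44.55:9092, which is unreachable from outside. Pointing the forward at the EXTERNAL port keeps the listener pairing consistent:

```properties
# Assumes the gateway forward is changed to: [public_ip]:39092 -> 10.242.44.55:39092
listeners=INTERNAL://:9092,EXTERNAL://:39092
advertised.listeners=INTERNAL://10.242.44.55:9092,EXTERNAL://[public_ip]:39092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```

The rule of thumb: the address a client uses to connect decides which listener answers, and the broker then advertises that same listener's `advertised.listeners` entry back to the client for all further connections.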