kafka-producer-api

Kafka 0.9.0.1 Java Consumer stuck in awaitMetadataUpdate()

Question: I'm trying to get a simple Kafka consumer to work using the Java API v0.9.0.1. The Kafka server I'm using is a Docker container, also running version 0.9.0.1. Below is the consumer code: public class Consumer { public static void main(String[] args) throws IOException { KafkaConsumer<String, String> consumer; try (InputStream props = Resources.getResource("consumer.props").openStream()) { Properties properties = new Properties(); properties.load(props); consumer = new KafkaConsumer<>
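For context, a minimal 0.9-style consumer looks roughly like the sketch below. When such a consumer hangs in awaitMetadataUpdate(), the usual culprit is that the broker's advertised address (advertised.host.name in 0.9) is not resolvable or reachable from the client, which is common with Docker. The topic name and address here are illustrative, not taken from the question.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MinimalConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Must be an address the *client* can reach; with Docker, the broker's
        // advertised.host.name must also resolve from the client side, or the
        // consumer blocks waiting for usable metadata.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000); // the 0.9 API takes a long timeout
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d, key=%s, value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}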

Kafka Java producer and consumer with ACLs enabled on a topic

Question: I'm a bit confused by Kafka ACL configuration, where we configure authorization for producers and consumers. There are various examples showing how to produce/consume messages from the command line. Do we need any extra configuration to produce/consume messages to/from a secured Kafka topic using the Java API? Answer 1: @Apollo: This question is quite vague. If you want to learn ACL/SSL it will take some time; the link below might help you get started: https://github.com/Symantec/kafka-security-0.9 Answer 2:
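For reference, the producing/consuming code itself does not change for a secured cluster; the client only needs the security properties for the listener it connects to, while ACLs are granted broker-side (e.g. with kafka-acls.sh) to the authenticated principal. A minimal sketch for an SSL listener, with all paths, passwords, and addresses as placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SecureProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9093"); // placeholder SSL listener
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Only these security properties are added on top of a normal producer;
        // the broker checks its ACLs against the principal from the SSL handshake.
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/path/to/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        props.put("ssl.keystore.location", "/path/to/client.keystore.jks");
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.key.password", "changeit");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("secured-topic", "key", "value"));
        }
    }
}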

Getting a serialization error when publishing a message to a Kafka topic

Question: I'm using program variables to create the configuration objects, and I load the schema from a local path; the schema has also been registered in Kafka. I create the data object and serialize it using the "GenericRecord" approach. var logMessageSchema = (Avro.RecordSchema)Avro.Schema.Parse(File.ReadAllText(@"C:\StatusMessageSchema\FileStatusMessageSchema.txt")); var record = new GenericRecord(logMessageSchema); record.Add("SystemID", "100"); record.Add("FileName", "ABS_DHCS"); record.Add("FileStatus", "3009");
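The snippet above is C#; for comparison, the equivalent GenericRecord construction with the Java Avro library looks roughly like the sketch below. The schema string is an assumed reconstruction from the field names in the question; a mismatch between the record's fields and the registered schema is a common source of serialization errors.

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

public class StatusRecordExample {
    public static void main(String[] args) {
        // Assumed schema, reconstructed only from the field names in the question.
        String schemaJson = "{\"type\":\"record\",\"name\":\"FileStatusMessage\","
                + "\"fields\":[{\"name\":\"SystemID\",\"type\":\"string\"},"
                + "{\"name\":\"FileName\",\"type\":\"string\"},"
                + "{\"name\":\"FileStatus\",\"type\":\"string\"}]}";
        Schema schema = new Schema.Parser().parse(schemaJson);

        GenericRecord record = new GenericData.Record(schema);
        record.put("SystemID", "100");
        record.put("FileName", "ABS_DHCS");
        record.put("FileStatus", "3009");
        // Every non-nullable field must be set, and each value's type must match
        // the schema, or the Avro serializer will reject the record.
        System.out.println(record);
    }
}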

Remotely accessing Kafka running inside Kubernetes

Question: I have a single-node Kafka broker running inside a pod in a single-node Kubernetes environment. I am using this image for Kafka: https://hub.docker.com/r/wurstmeister/kafka (Kafka version 1.1.0). The Kubernetes cluster is running inside a VM on a server; the VM has the IP 192.168.3.102 on its active interface ens32. Kafka.yaml: apiVersion: extensions/v1beta1 kind: Deployment metadata: namespace: casb-deployment name: kafkaservice spec: replicas: 1 template: metadata: labels: app:
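For reference, the usual approach is to expose the broker through a NodePort (or LoadBalancer) Service and configure the image (via KAFKA_ADVERTISED_LISTENERS in the wurstmeister image) to advertise an address external clients can actually reach. A client-side sketch, assuming a hypothetical NodePort 30092 on the VM's IP:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ExternalClient {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Clients bootstrap from this address, then connect to whatever address
        // the broker *advertises*; if the broker advertises its pod-internal
        // address, external clients bootstrap fine but fail on the next step.
        // The port 30092 is an assumption, not from the question.
        props.put("bootstrap.servers", "192.168.3.102:30092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "hello from outside the cluster"));
        }
    }
}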

Cannot produce a message when the main thread sleeps less than 1000

Question: When I am using the Java API of Kafka, if I let my main thread sleep for less than 2000 ns, it cannot produce any message. I really want to know why this happens. Here is my producer: public class Producer { private final KafkaProducer<String, String> producer; private final String topic; public Producer(String topic, String[] args) { //...... //...... producer = new KafkaProducer<>(props); this.topic = topic; } public void producerMsg() throws InterruptedException { String data = "Apache Storm is a
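For context, KafkaProducer.send() is asynchronous: it only enqueues the record in an in-memory buffer, and a background sender thread ships batches later. If the main thread exits before the sender gets a chance to run, nothing reaches the broker, which is why a short sleep appears to "fix" it. Calling flush() or close() is the reliable alternative, as in this sketch (broker address and topic are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FlushingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // send() returns immediately; the record is only buffered at this point.
        producer.send(new ProducerRecord<>("demo-topic", "Apache Storm is a ..."));
        producer.flush(); // block until buffered records are actually sent
        producer.close(); // no Thread.sleep() required
    }
}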

How to access Kafka brokers secured by Kerberos from Eclipse running on Windows 7

Question: I have this Java code that tries to add messages to a Kafka queue: String msgID = UUID.randomUUID().toString(); Properties prop = new Properties(); prop.put("metadata.broker.list", DEFAULT_BROKER); prop.put("serializer.class", "kafka.serializer.StringEncoder"); prop.put("request.required.acks", "-1"); prop.put("producer.type", "async"); ProducerConfig config = new ProducerConfig(prop); Producer<String, String> producer = new Producer<String, String>(config); KeyedMessage<String, String> message =
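Worth noting: the snippet above uses the legacy producer (metadata.broker.list, ProducerConfig, KeyedMessage), which predates Kafka's SASL/Kerberos support; connecting to a Kerberized cluster requires the new org.apache.kafka.clients.producer.KafkaProducer with security.protocol set to SASL_PLAINTEXT (or SASL_SSL) plus a JAAS configuration. A sketch, with all paths, addresses, and names as placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KerberosProducer {
    public static void main(String[] args) {
        // On Windows, point the JVM at the Kerberos and JAAS configuration files.
        System.setProperty("java.security.krb5.conf", "C:/kafka/krb5.conf");
        System.setProperty("java.security.auth.login.config", "C:/kafka/kafka_client_jaas.conf");

        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9092"); // placeholder
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.kerberos.service.name", "kafka"); // must match the broker's principal name
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "hello"));
        }
    }
}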

KafkaProducer sendOffsetsToTransaction needs offset+1 to successfully commit the current offset

Question: I'm trying to use a transaction in a Kafka Processor to make sure I don't reprocess the same message twice. Given a message (A), I need to create a list of messages that will be produced to another topic in a transaction, and I want to commit the original message (A) in the same transaction. From the documentation I found the producer method sendOffsetsToTransaction, which commits an offset within the transaction only if the transaction succeeds. This is the code inside the process() method
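The +1 in the title follows Kafka's committed-offset convention: a committed offset names the next record to be consumed, not the last one processed, so for a record at offset N the value to commit is N + 1. A sketch of that pattern (topic, group id, and surrounding setup are assumptions; the producer is assumed to be configured with a transactional.id and to have called initTransactions() once at startup):

import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

public class TransactionalForwarder {
    void forward(KafkaProducer<String, String> producer, ConsumerRecord<String, String> recordA) {
        producer.beginTransaction();
        try {
            producer.send(new ProducerRecord<>("output-topic", recordA.value()));
            Map<TopicPartition, OffsetAndMetadata> offsets = Collections.singletonMap(
                    new TopicPartition(recordA.topic(), recordA.partition()),
                    new OffsetAndMetadata(recordA.offset() + 1)); // +1: the next record to consume
            producer.sendOffsetsToTransaction(offsets, "my-consumer-group");
            producer.commitTransaction(); // offsets and output commit atomically
        } catch (RuntimeException e) {
            producer.abortTransaction(); // neither output nor offsets become visible
            throw e;
        }
    }
}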

Kafka: Is there a Kafka clients API for Scala?

Question: I am just starting with Kafka; it sounds really good for microservices, but I work mostly in Scala. I added Kafka to my sbt project with this: libraryDependencies += "org.apache.kafka" %% "kafka" % "2.0.0" Then I do this: import org.apache.kafka.clients.producer.{Callback, KafkaProducer, Producer} ... val producer = new KafkaProducer[String, String](props) val record = new ProducerRecord[String, String]("my-topic", "key", "value") val fut = producer.send(record, callBack) ... My problem
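For what it's worth, the officially supported client is the Java kafka-clients artifact, which is directly usable from Scala and would be declared with a single % ("org.apache.kafka" % "kafka-clients" % "2.0.0"), whereas the "kafka" artifact pulls in the whole broker. The producer.send(record, callback) call in the snippet is the Java API sketched below (broker address and topic are placeholders):

import java.util.Properties;
import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class CallbackProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record = new ProducerRecord<>("my-topic", "key", "value");
            // send() returns a Future<RecordMetadata>; the Callback fires when
            // the broker acknowledges the record or the send fails.
            Future<RecordMetadata> fut = producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.printf("acked at %s-%d@%d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
            fut.get(); // or rely solely on the callback
        }
    }
}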

How does librdkafka producer learn about new topic partitions in Kafka

Question: I'm running rdkafka_simple_producer.c to produce messages to a Kafka cluster. I have one topic with 30 partitions and use the default round-robin partitioner. While the producer is running and generating messages to Kafka, I add more partitions to the topic: kafka/bin/kafka-topics.sh --alter --zookeeper server2:2181 --topic demotest --partitions 40 I'd expect the producer to notice the change and eventually begin producing to all 40 partitions. However, at the end I only see data was produced to
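In general, a producer only learns about new partitions when it refreshes topic metadata. librdkafka does this periodically, governed by topic.metadata.refresh.interval.ms (default 300000 ms, i.e. five minutes), so the extra partitions are only used after the next refresh. The Java producer's analogous knob is metadata.max.age.ms, shown in this sketch (values and addresses are illustrative):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MetadataRefreshProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "server2:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Force a metadata refresh at least every 30 s instead of the 5-minute
        // default, so partition-count changes are noticed sooner.
        props.put("metadata.max.age.ms", "30000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                producer.send(new ProducerRecord<>("demotest", "message-" + i));
            }
        }
    }
}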