confluent-kafka

Kafka producer unexpected behaviour

纵然是瞬间 submitted on 2019-12-25 00:05:52
Question: I am running into strange behaviour with my Kafka producer and consumer. Below is my setup on my local machine: 1 ZooKeeper node, 2 Kafka broker nodes, 1 producer (doing async writes), and 1 subscriber written in Go using this library. I am creating a topic using Kafka's command-line tool as below:

    ./kafka-topics.sh --zookeeper localhost:2181 --create --topic foo --partitions 1 --replication-factor 2 --config min.insync.replicas=2

The issue is that whenever I kill the leader node of the partition, the …
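The entry's producer is written in Go, but the setting that governs this scenario is the same across librdkafka-based clients: with replication-factor 2 and min.insync.replicas=2, async writes only surface the loss of a replica if the producer requires acknowledgement from all in-sync replicas. A minimal sketch with the Python confluent-kafka client, assuming brokers on localhost:9092 and localhost:9093 (hypothetical addresses):

    from confluent_kafka import Producer

    def on_delivery(err, msg):
        # With min.insync.replicas=2 and only one broker alive, acks=all
        # deliveries should fail with NOT_ENOUGH_REPLICAS instead of
        # silently succeeding on the surviving leader.
        if err is not None:
            print("Delivery failed: %s" % err)
        else:
            print("Delivered to %s [%d] @ %d"
                  % (msg.topic(), msg.partition(), msg.offset()))

    p = Producer({
        "bootstrap.servers": "localhost:9092,localhost:9093",  # assumption
        "acks": "all",  # wait for every in-sync replica to acknowledge
    })
    p.produce("foo", b"payload", callback=on_delivery)
    p.flush(10)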

Kafka AvroConsumer consume from timestamp using offsets_for_times

亡梦爱人 submitted on 2019-12-24 07:47:48
Question: Trying to use confluent_kafka.AvroConsumer to consume messages from a given timestamp.

    if flag:
        # creating a list
        topic_partitons_to_search = list(
            map(lambda p: TopicPartition('my_topic2', p, int(time.time())), range(0, 1)))
        print("Searching for offsets with %s" % topic_partitons_to_search)
        offsets = c.offsets_for_times(topic_partitons_to_search, timeout=1.0)
        print("offsets_for_times results: %s" % offsets)
        for x in offsets:
            c.seek(x)
        flag = False

The console returns this: Searching for offsets …
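One detail worth flagging in the code above: offsets_for_times() interprets the offset field of each TopicPartition as a timestamp in milliseconds, while int(time.time()) yields seconds, so the lookup lands near the start of the epoch and resolves to the earliest (or no) offset. A corrected sketch under the same assumptions (topic 'my_topic2', one partition, an already-constructed consumer c):

    import time
    from confluent_kafka import TopicPartition

    ts_ms = int(time.time() * 1000) - 5 * 60 * 1000  # e.g. five minutes ago
    parts = [TopicPartition('my_topic2', p, ts_ms) for p in range(0, 1)]
    offsets = c.offsets_for_times(parts, timeout=1.0)
    for tp in offsets:
        if tp.offset >= 0:   # -1 means no message at or after the timestamp
            c.seek(tp)       # the partition must already be assigned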

Kafka Configuration for only seeing last 5 minutes of data

感情迁移 submitted on 2019-12-20 05:58:04
Question: Sorry, I am new to Kafka and this question might be easy, but I need some help; I have not figured out some of the configuration. There is streaming data, and I want consumers to see only the last 5 minutes of messages that producers sent. I am using Confluent.Kafka for .NET:

    var config = new Dictionary<string, object>
    {
        { "group.id", "Test1Costumers" },
        { "bootstrap.servers", brokerEndpoint },
        { "auto.commit.interval.ms", 60000 },
        { "auto.offset.reset", "earliest" }
    };

Here is the config dictionary of consumers in …
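Kafka has no broker or consumer setting that limits visibility to a time window, so "only the last 5 minutes" is usually implemented by seeking each assigned partition to the offset whose timestamp is now minus five minutes (topic-level retention.ms prunes old data, but not on an exact 5-minute boundary). A sketch of that approach with the Python client rather than the entry's .NET client; the broker address and topic name are placeholders:

    import time
    from confluent_kafka import Consumer, TopicPartition

    def rewind_five_minutes(consumer, partitions):
        ts_ms = int(time.time() * 1000) - 5 * 60 * 1000
        lookup = [TopicPartition(p.topic, p.partition, ts_ms) for p in partitions]
        offsets = consumer.offsets_for_times(lookup, timeout=10.0)
        # Partitions with no message in the window resolve to offset -1,
        # which assign() treats as "end of partition".
        consumer.assign(offsets)

    c = Consumer({
        "bootstrap.servers": "localhost:9092",   # placeholder
        "group.id": "Test1Costumers",
        "auto.offset.reset": "earliest",
    })
    c.subscribe(["my_topic"], on_assign=rewind_five_minutes)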

Kafka integration in unity3d throwing Win32Exception error

谁都会走 submitted on 2019-12-20 01:58:24
Question: I am trying to run a code sample of Kafka in a Unity environment, and for this reason I created a consumer client (code given below).

    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;
    using Confluent.Kafka;
    using Confluent.Kafka.Serialization;
    using System.Text;

    public class KafkaConsumer : MonoBehaviour
    {
        // Use this for initialization
        void Start()
        {
            /*
             * The consumer application will then pick the messages from the
             * same topic and write them to console output.
             * …

Confluent Control Center Interceptor

梦想的初衷 submitted on 2019-12-13 03:19:02
Question: How do I add the Confluent Control Center interceptor to an existing S3 (sink) connector, to monitor the sink? I am looking for documentation. Any help is appreciated.

Answer 1: To be absolutely clear, you need interceptors on your sink and source. If you don't, you can't monitor your pipelines with Confluent Control Center as it stands today. To enable interceptors in Kafka Connect, add to the worker properties file:

    consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor…
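The property line above is cut off in this digest. For reference, the interceptor classes Confluent documents for Control Center monitoring are shown below as a sketch of the worker properties, assuming the monitoring-interceptors JAR is on the Connect worker's classpath; sink connectors consume and source connectors produce, so both lines are normally set:

    consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
    producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor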

`apt-get install librdkafka1` fails on Debian 9.x due to libssl dependency

拜拜、爱过 submitted on 2019-12-12 19:06:12
Question: A basic apt-get install librdkafka1 works on Debian 8.x but fails on Debian 9.x. This looks like a dependency version issue regarding libssl: Debian 8.x had libssl1.0.0, while Debian 9.x has libssl1.0.2 and libssl1.1 but no libssl1.0.0, and this version bump causes the librdkafka1 install to break. This is easily reproducible on the latest official Docker Debian 9 image:

    docker pull debian:9
    docker run --rm -it debian:9

Then within the container:

    cat /etc/debian_version   # 9.4
    apt-get update            # Get…

Not able to access messages from confluent kafka on EC2

夙愿已清 submitted on 2019-12-11 17:16:12
Question: Confluent Kafka 5.0.0 has been installed on an AWS EC2 instance which has a public IP, say 54.XX.XX.XX. Port 9092 is opened on the EC2 machine to 0.0.0.0. In /etc/kafka/server.properties I have:

    advertised.listeners=PLAINTEXT://54.XX.XX.XX:9092
    listeners=PLAINTEXT://0.0.0.0:9092

In /etc/kafka/producer.properties I have:

    bootstrap.servers=0.0.0.0:9092

On the local machine, in /etc/kafka/consumer.properties I have:

    bootstrap.servers=54.XX.XX.XX:9092

On the EC2 instance I started Kafka with 'confluent start' and created 'mytopic'. My …
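A likely culprit in this setup is bootstrap.servers=0.0.0.0:9092 in producer.properties: 0.0.0.0 is a bind address, not an address a client can connect to, so the producer never reaches the broker. A sketch of the configuration that usually works here, keeping the entry's placeholder IP:

    # server.properties on the EC2 broker: bind all interfaces, but
    # advertise the address clients can actually reach.
    listeners=PLAINTEXT://0.0.0.0:9092
    advertised.listeners=PLAINTEXT://54.XX.XX.XX:9092

    # producer.properties / consumer.properties (clients on EC2 or local):
    bootstrap.servers=54.XX.XX.XX:9092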

Kafka: Does Confluent’s HDFS connector support Snappy compression?

此生再无相见时 submitted on 2019-12-11 16:20:07
Question: I don't see any configuration for compression in the HDFS connector docs (https://docs.confluent.io/current/connect/connect-hdfs/docs/configuration_options.html). Does it support compression? If yes, what do I need to add in the properties file?

Answer 1: Snappy compression was recently added to the HDFS connector for Avro. To enable it, set the property avro.codec to snappy. With Parquet it has been available since the beginning, and it is the codec used when exporting Parquet files.
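For concreteness, a sketch of a minimal HDFS sink connector properties file with Snappy enabled for Avro output; the connector name, topic, and HDFS URL are placeholders, and format.class values can differ between connector versions:

    name=hdfs-sink
    connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
    tasks.max=1
    topics=my_topic
    hdfs.url=hdfs://namenode:8020
    flush.size=1000
    format.class=io.confluent.connect.hdfs.avro.AvroFormat
    avro.codec=snappy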

.NET Kerberos from Windows to Linux (different realms)

半世苍凉 submitted on 2019-12-11 06:10:45
Question: If I have different Kerberos realms, the broker sits on Linux, and the producer sits on Windows, how do I enable connectivity using Kerberos? I have a valid keytab, and here is the krb5. Please see the marked answer to the question at this link: Connect to Kafka on Unix from Windows with Kerberos. The question below is a continuation of the 3rd scenario explained by @Samson. Answering some of Samson's suggestions: 1. The default realm is added in krb5. 2. There is one-way trust; the broker domain trusts my domain. …
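For context, the client-side knobs involved are librdkafka's SASL/GSSAPI settings; the cross-realm hop itself is resolved by the KDCs through krb5.conf ([capaths]/[domain_realm]), not by the Kafka client. A sketch with the Python client and hypothetical principal and host names; note that Windows builds of librdkafka that use SSPI authenticate as the logged-on user and ignore the keytab/principal settings, so the one-way trust must be usable by that account:

    from confluent_kafka import Producer

    p = Producer({
        "bootstrap.servers": "broker1.linuxrealm.example:9092",      # assumption
        "security.protocol": "SASL_PLAINTEXT",
        "sasl.mechanisms": "GSSAPI",
        "sasl.kerberos.service.name": "kafka",
        # GSSAPI (non-SSPI) builds only:
        "sasl.kerberos.principal": "produser@WINDOWSREALM.EXAMPLE",  # assumption
        "sasl.kerberos.keytab": "/path/to/produser.keytab",          # assumption
    })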

libsasl dependency issues when installing librdkafka1 via yum on aws linux machine

淺唱寂寞╮ submitted on 2019-12-10 23:09:37
Question: I'm trying to install the Python confluent-kafka package using pip. I'm attempting this on an AWS EC2 instance running Amazon Linux (version Amazon Linux AMI release 2016.09). I'm simply doing:

    pip install confluent-kafka

This however produces the following error:

    In file included from confluent_kafka/src/confluent_kafka.c:17:0:
    confluent_kafka/src/confluent_kafka.h:21:32: fatal error: librdkafka/rdkafka.h: No such file or directory
     #include <librdkafka/rdkafka.h>
                                    ^