confluent-platform

Kafka - uncompacted topics vs compacted topics

别说谁变了你拦得住时间么 submitted on 2020-08-06 05:40:23
Question: I came across the following passage in the book "Mastering Kafka Streams and ksqlDB", and the author uses two terms, "compacted topics" and "uncompacted topics". What do they really mean? Do they have anything to do with "log compaction"? Tables can be thought of as updates to a database. In this view of the logs, only the current state (either the latest record for a given key or some kind of aggregation) for each key is retained. Tables are usually built from compacted topics.
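In Kafka terms, a "compacted topic" is one whose cleanup.policy is set to compact, so log compaction keeps only the latest record per key, while an "uncompacted topic" uses the default delete (retention-based) policy; so yes, this is directly about log compaction. A minimal sketch of creating a compacted topic with the standard CLI (topic name and broker address are placeholders; older Kafka versions use --zookeeper instead of --bootstrap-server):
kafka-topics --create --bootstrap-server localhost:9092 --topic user-table --partitions 1 --replication-factor 1 --config cleanup.policy=compact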

Using a connector with Helm-installed Kafka/Confluent

你说的曾经没有我的故事 submitted on 2020-08-02 09:44:28
Question: I have installed Kafka on a local Minikube by using the Helm charts https://github.com/confluentinc/cp-helm-charts, following these instructions https://docs.confluent.io/current/installation/installing_cp/cp-helm-charts/docs/index.html, like so:
helm install -f kafka_config.yaml confluentinc/cp-helm-charts --name kafka-home-delivery --namespace cust360
The kafka_config.yaml is almost identical to the default YAML, with the one exception being that I scaled it down to 1 server/broker instead of
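For illustration, an override of that sort would typically live in kafka_config.yaml; a hedged sketch assuming the chart exposes cp-kafka.brokers and cp-zookeeper.servers keys (the exact keys depend on the cp-helm-charts version):
cp-zookeeper:
  servers: 1
cp-kafka:
  brokers: 1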

Kafka connector to hdfs: java.io.FileNotFoundException: File does not exist

爷，独闯天下 submitted on 2020-07-28 04:57:45
Question: Everything was installed via Ambari (HDP). I've ingested a sample file into Kafka; the topic is testjson, and the data was ingested from a CSV file via Filebeat. The topics were successfully created in Kafka:
/bin/kafka-topics.sh --list --zookeeper localhost:2181
result: test test060920 test1 test12 testjson
From Kafka I would like to ingest testjson into HDFS. quickstart-hdfs.properties:
name=hdfs-sink
connector.class=io.confluent.connect.hdfs3.Hdfs3SinkConnector
tasks.max=1
topics=testjson
hdfs.url=hdfs://x.x.x.x
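For reference, an HDFS 3 sink configuration of this kind normally also needs at least a complete namenode URL and a flush.size; a hedged sketch with placeholder values (not taken from the question):
hdfs.url=hdfs://namenode.example.com:8020
flush.size=3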

Upserting into multiple tables from multiples topics using kafka-connect

你离开我真会死。 submitted on 2020-07-21 03:17:49
Question: I am trying to read 2 Kafka topics using the JDBC sink connector and upsert into 2 Oracle tables which I created manually. Each table has one primary key, which I want to use in upsert mode. The connector works fine if I use it for only one topic and only one field in pk.fields, but if I enter multiple columns in pk.fields (one from each table), it fails to recognize the schema. Am I missing anything? Please suggest.
name=oracle_sink_prod
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
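Because pk.fields in the JDBC sink is a single connector-wide list rather than a per-topic setting, a common workaround is to run one connector per topic/table, each with its own pk.fields; a hedged sketch (the connector name, topic, and column names below are hypothetical, and connection settings are omitted):
name=oracle_sink_topic_a
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=TOPIC_A
insert.mode=upsert
pk.mode=record_value
pk.fields=ID_A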

KSQL streams - Get data from Array of Struct

折月煮酒 submitted on 2020-07-18 07:40:11
Question: My JSON looks like:
{ "Obj1": { "a": "abc", "b": "def", "c": "ghi" }, "ArrayObj": [ { "key1": "1", "Key2": "2", "Key3": "3" }, { "key1": "4", "Key2": "5", "Key3": "6" }, { "key1": "7", "Key2": "8", "Key3": "9" } ] }
I have written KSQL streams to convert it to AVRO and save it to a topic, so that I can push it to the JDBC sink connector:
CREATE STREAM Example1 (ArrayObj ARRAY<STRUCT<key1 VARCHAR, Key2 VARCHAR>>, Obj1 STRUCT<a VARCHAR>) WITH (kafka_topic='sample_topic', value_format='JSON'); CREATE
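One hedged way to flatten the array before handing it to the sink, assuming a ksqlDB version that supports the EXPLODE table function (the stream names here are hypothetical):
CREATE STREAM exploded AS SELECT Obj1->a AS a, EXPLODE(ArrayObj) AS item FROM Example1 EMIT CHANGES;
CREATE STREAM flattened WITH (value_format='AVRO') AS SELECT a, item->key1 AS key1, item->Key2 AS key2 FROM exploded EMIT CHANGES;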

Kafka JDBC Sink Connector gives a Null Pointer Exception for message with schema having an optional field

时光总嘲笑我的痴心妄想 submitted on 2020-07-07 11:25:22
Question: The Kafka JDBC Sink Connector gives a NullPointerException for a message whose schema has an optional field, here 'parentId'. Have I missed anything? I am using the out-of-the-box JsonConverter and JDBC Sink Connector. A message on the Kafka topic is:
{ "schema":{ "type":"struct", "fields":[ { "field":"id", "type":"string" }, { "field":"type", "type":"string" }, { "field":"eventId", "type":"string" }, { "field":"parentId", "type":"string", "optional":true }, { "field":"created", "type":"int64", "name":
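For context, this setup relies on the JsonConverter being told that each message carries an embedded schema; the standard Connect properties for that look like the following (worker- or connector-level, values illustrative):
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true
key.converter=org.apache.kafka.connect.storage.StringConverter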

NullPointerException when connecting Confluent Kafka and InfluxDB

前提是你 submitted on 2020-06-29 04:49:16
Question: I'm trying to use the Confluent InfluxDB Sink Connector to get data from a Kafka topic into my InfluxDB. First, I transmit data to the Kafka topic from a log file by using NiFi, and it works well. The Kafka topic gets the data, like below:
{ "topic": "testDB5", "key": null, "value": { "timestamp": "2019-03-20 01:24:29,461", "measurement": "INFO", "thread": "NiFi Web Server-795", "class": "org.apache.nifi.web.filter.RequestLogger", "message": "Attempting request for (anonymous) }, "partition": 0,
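For reference, a hedged sketch of an InfluxDB sink configuration along these lines; the connector class and the influxdb.* property names are assumptions about the Confluent InfluxDB sink, and the URL and database values are placeholders:
name=influxdb-sink
connector.class=io.confluent.connect.influxdb.InfluxDBSinkConnector
tasks.max=1
topics=testDB5
influxdb.url=http://localhost:8086
influxdb.db=testDB5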

Kafka commands for API to produce message, create consumer group, subscribe to consumer group and read records

為{幸葍}努か submitted on 2020-06-29 03:54:40
Question: I'm using the commands below to produce messages and create consumer groups, but I'm facing an issue with the consumer group not being created. Can you please help?
kafka-producer-perf-test --topic test-one --throughput 1000 --record-size 50 --num-records 10 --producer-props bootstrap.servers='localhost:9092' --producer.config producer.properties
kafka-consumer-perf-test --consumer.config consumer.properties --bootstrap-servers='localhost:9092' --topic test-one --messages 1
The command is able to produce a message but
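To verify whether the perf-test run actually registered a consumer group, the standard consumer-groups tool can list and describe groups (the group name below is a placeholder):
kafka-consumer-groups --bootstrap-server localhost:9092 --list
kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group <group-name>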
