confluent

Kafka - error when producing from command line (character ('<' (code 60)): expected a valid value)

Submitted by 你。 on 2020-01-23 17:01:31
Question: I spun up Kafka in Docker on my laptop (with docker-compose). After that, I created a new Kafka topic with: kafka-topics --zookeeper localhost:2181 --create --topic simple --replication-factor 1 --partitions 1 (I have not created a schema in the Schema Registry yet). Now I am trying to produce (based on step 3 of this example: https://docs.confluent.io/4.0.0/quickstart.html): kafka-avro-console-producer \ --broker-list localhost:9092 --topic simple \ --property value.schema='{"type":"record","name":"myrecord" …
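For reference, a minimal sketch of the quickstart-style invocation this is based on; the single string field f1 and the sample record are the quickstart's example values, not taken from this question:

```sh
# Quickstart-style Avro console producer (sketch; schema is the quickstart's example)
kafka-avro-console-producer \
  --broker-list localhost:9092 --topic simple \
  --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'
# then type one JSON record per line on stdin, e.g.:
# {"f1": "value1"}
```

The producer expects each stdin line to be a JSON document matching the declared schema, so any stray non-JSON input (such as pasted HTML starting with '<') triggers the "expected a valid value" parse error.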

Kafka Connect can't find connector

Submitted by 别等时光非礼了梦想. on 2020-01-23 08:29:06
Question: I'm trying to use the Kafka Connect Elasticsearch connector, and am unsuccessful. It is crashing with the following error: [2018-11-21 14:48:29,096] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:108) java.util.concurrent.ExecutionException: org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches io.confluent.connect.elasticsearch.ElasticsearchSinkConnector, available connectors are: …
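This error generally means the worker cannot see the connector's jars. A minimal sketch of the relevant standalone worker setting, assuming the Elasticsearch connector directory was installed under /usr/share/java (the exact paths are assumptions):

```properties
# connect-standalone.properties (sketch; paths and converters are assumptions)
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
offset.storage.file.filename=/tmp/connect.offsets
# plugin.path must list the directory that contains kafka-connect-elasticsearch/
plugin.path=/usr/share/java
```

After changing plugin.path the worker has to be restarted so it rescans the plugin directories.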

kafka-connect-jdbc : SQLException: No suitable driver only when using distributed mode

Submitted by 家住魔仙堡 on 2020-01-17 01:24:08
Question: We have successfully ingested data from MySQL into Kafka using the JDBC standalone connector, but are now facing an issue using the same setup in distributed mode (as a Kafka Connect service). Our connect-distributed.properties file: bootstrap.servers=IP1:9092,IP2:9092 group.id=connect-cluster key.converter.schemas.enable=true value.converter.schemas.enable=true offset.storage.topic=connect-offsets offset.storage.replication.factor=2 config.storage.topic=connect-configs config.storage.replication.factor=2 status …
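For the "No suitable driver" symptom, the JDBC driver jar has to be visible to every distributed worker, not only to the machine where the standalone worker ran. A minimal sketch, assuming the MySQL Connector/J jar and a Confluent-style package layout (file names and paths are assumptions):

```sh
# Sketch: place the MySQL JDBC driver next to the kafka-connect-jdbc jars
# on every Connect worker node, then restart the workers.
cp mysql-connector-java-5.1.47.jar /usr/share/java/kafka-connect-jdbc/
# Alternatively, make sure the directory containing the driver is covered by
# plugin.path in connect-distributed.properties on each worker.
```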

Kafka Message migration

Submitted by [亡魂溺海] on 2020-01-15 09:20:29
Question: We are currently running Apache Kafka 0.10.1.1 and are migrating to Confluent Platform 5.X. The new cluster is set up on a completely different set of physical nodes. While we are already working on upgrading the API(s) (our application uses spring-boot), we are trying to figure out how to migrate the messages. I need to maintain the same ordering of messages in the target cluster. Can I simply copy the messages? Do I need to republish the messages to the target cluster for successful …
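One standard way to copy messages between two live clusters is Kafka's MirrorMaker, which consumes from the source cluster and republishes into the target. A minimal sketch, assuming hypothetical property file names and topic whitelist:

```sh
# consumer.properties points at the old 0.10.1.1 cluster,
# producer.properties at the new Confluent 5.x cluster.
kafka-mirror-maker \
  --consumer.config consumer.properties \
  --producer.config producer.properties \
  --whitelist "orders|payments"   # or ".*" to mirror all topics
```

Note that Kafka only guarantees ordering within a partition, so republishing preserves per-partition order as long as the keying and partition count keep records mapping to the same partitions.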

How to populate the cache in CachedSchemaRegistryClient without making a call to register a new schema?

Submitted by 扶醉桌前 on 2020-01-13 11:58:13
Question: We have a Spark Streaming application that integrates with Kafka. I'm trying to optimize it because it makes excessive calls to the Schema Registry to download schemas. The Avro schema for our data rarely changes, yet our application currently calls the Schema Registry whenever a record comes in, which is far too often. I ran into CachedSchemaRegistryClient from Confluent, and it looked promising, but after looking into its implementation I'm not sure how to use its built-in cache to reduce the …
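For illustration, a minimal sketch of wiring a shared CachedSchemaRegistryClient into a KafkaAvroDeserializer, so repeated lookups hit the client's in-memory cache instead of the registry; the URL and capacity value are assumptions:

```java
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;
import io.confluent.kafka.serializers.KafkaAvroDeserializer;

// One client per JVM/executor; 100 is an assumed cache capacity
// (maximum number of schemas kept in the identity map).
SchemaRegistryClient registryClient =
    new CachedSchemaRegistryClient("http://schema-registry:8081", 100);

// Passing the client in means the deserializer reuses its cache rather than
// creating a fresh client and re-fetching schemas for every record.
KafkaAvroDeserializer deserializer = new KafkaAvroDeserializer(registryClient);
```

In a Spark job the key point is to create the client once per executor (for example in a lazily initialized singleton) rather than per record or per batch.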

Debezium from MySQL to Postgres with JDBC Sink - change of transforms.route.replacement gives a SinkRecordField error

Submitted by 最后都变了- on 2020-01-06 08:05:23
Question: I am using this debezium-examples setup. source.json: { "name": "inventory-connector", "config": { "connector.class": "io.debezium.connector.mysql.MySqlConnector", "tasks.max": "1", "database.hostname": "mysql", "database.port": "3306", "database.user": "debezium", "database.password": "dbz", "database.server.id": "184054", "database.server.name": "dbserver1", "database.whitelist": "inventory", "database.history.kafka.bootstrap.servers": "kafka:9092", "database.history.kafka.topic": "schema-changes …
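For context, the transforms.route.replacement setting mentioned in the title belongs to a RegexRouter transform on the JDBC sink side, which rewrites the CDC topic name (e.g. dbserver1.inventory.customers) into the target table name. An illustrative sketch of that portion of a sink config; the topic, regex, replacement, and connection details are assumptions, not the question's exact file:

```json
{
  "name": "jdbc-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "dbserver1.inventory.customers",
    "connection.url": "jdbc:postgresql://postgres:5432/inventory?user=postgresuser&password=postgrespw",
    "auto.create": "true",
    "transforms": "route",
    "transforms.route.type": "org.apache.kafka.connect.transforms.RegexRouter",
    "transforms.route.regex": "([^.]+)\\.([^.]+)\\.([^.]+)",
    "transforms.route.replacement": "$3"
  }
}
```

With "$3" the sink writes to a table named after the last topic segment (here "customers"); changing the replacement changes the table the sink expects to create or match.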

Why does Kafka JDBC Connect insert data as BLOB instead of varchar?

Submitted by 雨燕双飞 on 2020-01-06 07:01:18
Question: I am using a Java producer to insert data into my Kafka topic. Then I use Kafka JDBC Connect to insert the data into my Oracle table. Below is my producer code: package producer.serialized.avro; import org.apache.avro.Schema; import org.apache.avro.generic.GenericData; import org.apache.avro.generic.GenericRecord; import org.apache.kafka.clients.producer.KafkaProducer; import org.apache.kafka.clients.producer.ProducerConfig; import org.apache.kafka.clients.producer.ProducerRecord; import java.util …
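Since the pasted code is cut off, here is a generic sketch of an Avro producer of this shape: a record with explicit string fields serialized with KafkaAvroSerializer, so the sink receives typed columns rather than opaque bytes. The schema, topic name, and addresses are assumptions, not the question's actual values:

```java
package producer.serialized.avro;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class AvroProducerSketch {
    public static void main(String[] args) {
        // Illustrative schema with explicit string fields (not the question's real schema).
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Customer\",\"fields\":["
            + "{\"name\":\"id\",\"type\":\"string\"},"
            + "{\"name\":\"name\",\"type\":\"string\"}]}");

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");

        GenericRecord record = new GenericData.Record(schema);
        record.put("id", "42");
        record.put("name", "alice");

        try (KafkaProducer<Object, Object> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("customers", record));
        }
    }
}
```

If the producer or the Connect worker treats the value as raw bytes (for example via a byte-array converter) instead of Avro with a registered schema, the JDBC sink has no column type information and typically falls back to a binary column such as BLOB.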

Confluent Kafka REST Proxy asking to create new INSTANCE every time

Submitted by 本秂侑毒 on 2020-01-06 05:35:06
Question: I am following the Confluent Kafka documentation for using the REST Proxy: https://docs.confluent.io/current/kafka-rest/docs/intro.html. In the "Produce and Consume JSON Messages" part, I have been using the following APIs: I.) POST "http://localhost:8082/topics/jsontest" with headers "Content-Type: application/vnd.kafka.json.v2+json" and "Accept: application/vnd.kafka.v2+json", JSON body: {"records":[{"value":{"foo":"bar"}}]} II.) POST "http://localhost:8082/consumers/my_json_consumer" with header "Content …
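For comparison, the documented v2 consume flow has three steps after producing: create a consumer instance, subscribe it, and then fetch records against that same instance. A minimal curl sketch; the instance name my_consumer_instance follows the documentation's example, not this question:

```sh
# 1. Create a consumer instance in group my_json_consumer
curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" \
  --data '{"name": "my_consumer_instance", "format": "json", "auto.offset.reset": "earliest"}' \
  http://localhost:8082/consumers/my_json_consumer

# 2. Subscribe the instance to the topic
curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" \
  --data '{"topics": ["jsontest"]}' \
  http://localhost:8082/consumers/my_json_consumer/instances/my_consumer_instance/subscription

# 3. Fetch records (reuse this instance URL; do not create a new instance per fetch)
curl -X GET -H "Accept: application/vnd.kafka.json.v2+json" \
  http://localhost:8082/consumers/my_json_consumer/instances/my_consumer_instance/records
```

The instance is stateful and expires if it is not used within the proxy's idle timeout, which is why subsequent fetches must reuse the instance URL returned in step 1 rather than creating a new instance every time.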

Kafka consumer startup delay confluent dotnet

Submitted by 旧街凉风 on 2020-01-02 06:28:31
Question: When starting up a confluent-dotnet consumer, after the call to subscribe and the subsequent polling, it seems to take a very long time (about 10-15 seconds) to receive the "Partition assigned" event from the server, and therefore the messages. At first I thought there was auto topic creation overhead, but the time is the same whether or not the topic and consumer group already exist. I start my consumer with this config; the rest of the code is the same as in the Confluent advanced …
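One broker-side setting that adds a fixed delay before a newly formed consumer group gets its first assignment is group.initial.rebalance.delay.ms; whether it accounts for the full 10-15 seconds here is uncertain, but lowering it on a development broker is a common first check. A sketch (the value shown is an assumption):

```properties
# server.properties (broker side, development only)
# Delay before the group coordinator performs the first rebalance of a new
# consumer group; the broker default is 3 seconds.
group.initial.rebalance.delay.ms=0
```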

Start Confluent Schema Registry in windows

Submitted by 让人想犯罪 __ on 2020-01-01 02:44:18
Question: I have a Windows environment with my own Kafka and ZooKeeper instances running. To use custom objects, I started to use Avro, but I needed to get the registry started. I downloaded the Confluent Platform and ran this: $ ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties /c/Confluent/confluent-3.0.0-2.11/confluent-3.0.0/bin/schema-registry-run-class: line 103: C:\Program: No such file or directory Then I see this on the installation page: "Confluent does not currently support …
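Since the shell launcher scripts are not supported on Windows, one workaround is to run Schema Registry in Docker and point it at the existing Kafka broker on the host. A minimal sketch; the image tag, port, and host address (host.docker.internal works with Docker Desktop) are assumptions:

```sh
# Sketch: run Schema Registry in a container against an existing cluster.
docker run -d --name schema-registry -p 8081:8081 \
  -e SCHEMA_REGISTRY_HOST_NAME=schema-registry \
  -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://host.docker.internal:9092 \
  confluentinc/cp-schema-registry:5.3.1
```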