confluent

Kafka JDBC source connector timestamp mode failing for sqlite3

Submitted by 夙愿已清 on 2019-12-11 06:34:29
Question: I tried to set up a database with two tables in SQLite. One of my tables has a timestamp column. I am trying to use timestamp mode to capture incremental changes in the DB. Kafka Connect is failing with the error below: ERROR Failed to get current time from DB using Sqlite and query 'SELECT CURRENT_TIMESTAMP' (io.confluent.connect.jdbc.dialect.SqliteDatabaseDialect:471) java.sql.SQLException: Error parsing time stamp Caused by: java.text.ParseException: Unparseable date: "2019-02
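
For reference, a minimal sketch of the timestamp-mode configuration the question describes, submitted through the Connect REST API. The connector name, database path, and column name below are assumptions, not taken from the question; note also that SQLite stores timestamps as text, which is a plausible source of the parse failure.

    # Create a JDBC source connector in timestamp mode (names assumed)
    curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
      "name": "sqlite-timestamp-source",
      "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:sqlite:/tmp/test.db",
        "mode": "timestamp",
        "timestamp.column.name": "updated_at",
        "topic.prefix": "sqlite-"
      }
    }'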

Kafka Connect - File Source Connector error

Submitted by 人盡茶涼 on 2019-12-11 06:14:06
Question: I am playing with the Confluent Platform/Kafka Connect and similar things and I wanted to run a few examples. I followed the quickstart from here. That means: Install Confluent Platform (v3.2.1); run Zookeeper, Kafka Broker and Schema Registry; run the example for reading file data (with Kafka Connect). I ran this command (number 3): [root@sandbox confluent-3.2.1]# ./bin/connect-standalone ./etc/schema-registry/connect-avro-standalone.properties ./etc/kafka/connect-file-source.properties but got this result:
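
For context, the file source properties used by step 3 contain roughly the following; this is a sketch of the distribution's stock file, reproduced from memory rather than from the questioner's setup. It tails test.txt from the current directory into the connect-test topic:

    # Recreate the stock quickstart file source config (contents assumed)
    cat <<'EOF' > ./etc/kafka/connect-file-source.properties
    name=local-file-source
    connector.class=FileStreamSource
    tasks.max=1
    file=test.txt
    topic=connect-test
    EOF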

Error retrieving Avro schema for id 1, Subject not found.; error code: 40401

Submitted by 雨燕双飞 on 2019-12-11 06:11:31
Question: Caused by: org.apache.kafka.common.errors.SerializationException: Error retrieving Avro schema for id 1 Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Subject not found.; error code: 40401. Confluent Version 4.1.0. I am consuming data from a couple of topics (topic_1, topic_2) using a KTable, joining the data and then pushing the data onto another topic (topic_out) using a KStream (KTable.toStream()). The data is in Avro format. When I check the schema by
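
When chasing a 40401, it can help to compare the subjects the Schema Registry actually holds against the ones the clients ask for; a quick check, with the registry host/port assumed to be the defaults:

    # List every subject the registry knows about
    curl http://localhost:8081/subjects
    # Inspect the schema registered for the output topic's value, if present
    curl http://localhost:8081/subjects/topic_out-value/versions/latest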

Kafka AVRO - conversion from long to datetime

Submitted by 妖精的绣舞 on 2019-12-11 02:29:14
Question: I get the following error when I want to send an AVRO message which contains a field that has the type long: Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 61 Caused by: java.lang.ClassCastException: java.lang.Long cannot be cast to org.joda.time.DateTime. I use Confluent 3.2.0 and Apache Spark 2.2.0. This error is thrown in a Spark job which processes AVRO messages and prints them to a console. In the AVRO schema, the corresponding
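
The ClassCastException suggests the consumer expects a Joda DateTime while the decoded Avro value is a plain Long. The standard Avro way to mark a long as a timestamp is the timestamp-millis logical type; a minimal sketch with the console producer, where the topic and field names are assumptions:

    kafka-avro-console-producer \
      --broker-list localhost:9092 --topic events \
      --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"ts","type":{"type":"long","logicalType":"timestamp-millis"}}]}'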

Specify version when deploying to Nexus from Maven

Submitted by 那年仲夏 on 2019-12-10 22:25:47
Question: I've forked Confluent's Kafka Connect HDFS writer and now I'd like to deploy a version of this jar to my local Nexus. mvn clean deploy works like a charm and deploys the jar: https://[nexus]/repository/releases/io/confluent/kafka-connect-hdfs/5.0.0/kafka-connect-hdfs-5.0.0.jar So far so good, but to make a distinction between the Confluent versions and my own deployment I'd like to change the version of the build to something like 5.0.0-1 or so (preferably the tag name when pushed, but that's
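
One common approach, sketched here rather than taken from an answer, is the versions-maven-plugin, which rewrites the version in the POM before deploying; substituting a git tag for the hard-coded version is an assumption about the desired workflow:

    # Set an explicit version (or use $(git describe --tags) for the pushed tag)
    mvn versions:set -DnewVersion=5.0.0-1 -DgenerateBackupPoms=false
    mvn clean deploy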

Kafka JDBC Connect query causes ORA-00933: SQL command not properly ended

Submitted by 允我心安 on 2019-12-08 07:37:19
Question: I have this Oracle SQL query: SELECT * FROM (SELECT SO_ORDER_KEY,QUEUE_TYPE,SYS_NO, DENSE_RANK() OVER (PARTITION BY SO_ORDER_KEY ORDER BY SYS_NO DESC) ORDER_RANK FROM TSY940) WHERE ORDER_RANK=1; When run in SQL Developer, it returns the desired result. For some reason, when I use this query in the kafka-connect-jdbc properties I get ERROR Failed to run query for table TimestampIncrementingTableQuerier{name='null', query='SELECT * FROM (SELECT SO_ORDER_KEY,QUEUE_TYPE,SYS_NO,DENSE_RANK()
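
ORA-00933 with the JDBC connector frequently comes from the trailing semicolon, since the connector wraps the configured query in SQL of its own. A sketch of the relevant properties, where the connector name, connection URL, and mode are assumptions:

    cat <<'EOF' > oracle-rank-source.properties
    name=oracle-rank-source
    connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
    connection.url=jdbc:oracle:thin:@//dbhost:1521/ORCL
    mode=bulk
    # No trailing semicolon on the query
    query=SELECT * FROM (SELECT SO_ORDER_KEY,QUEUE_TYPE,SYS_NO, DENSE_RANK() OVER (PARTITION BY SO_ORDER_KEY ORDER BY SYS_NO DESC) ORDER_RANK FROM TSY940) WHERE ORDER_RANK=1
    topic.prefix=tsy940-ranked
    EOF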

How to fetch Kafka source connector schema based on connector name

Submitted by *爱你&永不变心* on 2019-12-08 06:51:11
Question: I am using the Confluent JDBC Kafka connector to publish messages to a topic. The source connector sends data to the topic along with a schema on each poll. I want to retrieve this schema. Is it possible? How? Can anyone suggest an approach? My intention is to create a KSQL stream or table based on the schema built by the Kafka connector on poll. Answer 1: The best way to do this is to use Avro, in which the schema is stored separately and automatically used by Kafka Connect and KSQL. You can use Avro by configuring
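
Once the connector writes Avro, the schema lives in the Schema Registry under the subject <topic>-value and can be fetched directly; a sketch, assuming the default registry address and a topic named my-jdbc-topic:

    curl http://localhost:8081/subjects/my-jdbc-topic-value/versions/latest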

What are the benefits of the Kafka REST Proxy API?

Submitted by 可紊 on 2019-12-08 04:51:10
Question: I do not know the advantages of the Kafka REST Proxy API. It's a REST API, so I know it's handy for administration. Why do people use the Kafka REST Proxy API? Is it burdensome to add a Maven dependency for a producer or a consumer? Also, I know that the Kafka client has better performance. Answer 1: You wouldn't use it for performance. Administration - you can make a single ACL for only REST Proxy communications. Integrate with non-JVM languages that have no Kafka libraries. For example, client side
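
To make the non-JVM point concrete, producing to a topic through the REST Proxy is a single HTTP call from any language; a sketch using the v2 JSON format, with the proxy address and topic name assumed:

    curl -X POST \
      -H "Content-Type: application/vnd.kafka.json.v2+json" \
      --data '{"records":[{"value":{"f1":"value1"}}]}' \
      http://localhost:8082/topics/simple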

Kafka - error when producing from command line (character ('<' (code 60)): expected a valid value)

Submitted by 喜欢而已 on 2019-12-07 07:25:38
I spun up Kafka in Docker on my laptop (with docker-compose). After that, I created a new Kafka topic with: kafka-topics --zookeeper localhost:2181 --create --topic simple --replication-factor 1 --partitions 1 (I did not create a schema in the Schema Registry yet). Now I am trying to produce (based on this example - step 3 - https://docs.confluent.io/4.0.0/quickstart.html ): kafka-avro-console-producer \ --broker-list localhost:9092 --topic simple \ --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}' Entering value: {"f1": "value1"} Error: {"f1": "value1"}
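
An unexpected '<' where JSON is expected usually means an HTML page (often an error page) came back in place of a Schema Registry response. One plausible check, assuming the registry should be at its default address:

    # If this returns HTML rather than JSON, the producer is not reaching
    # a working Schema Registry at the URL it was given
    curl http://localhost:8081/subjects
    # The registry URL can also be set explicitly on the console producer
    # via --property schema.registry.url=http://localhost:8081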

Kafka Streams with lookup data on HDFS

Submitted by 情到浓时终转凉″ on 2019-12-07 02:37:22
Question: I'm writing an application with Kafka Streams (v0.10.0.1) and would like to enrich the records I'm processing with lookup data. This data (a timestamped file) is written into an HDFS directory on a daily basis (or 2-3 times a day). How can I load this into the Kafka Streams application and join it to the actual KStream? What would be the best practice for rereading the data from HDFS when a new file arrives there? Or would it be better to switch to Kafka Connect and write the RDBMS table content to a
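
If the Kafka Connect route mentioned at the end were taken, the lookup data would be streamed into its own topic and read from there by the Streams application; a sketch of a JDBC source for that purpose, where every name, from the connector to the table, is an assumption:

    curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
      "name": "lookup-table-source",
      "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://dbhost:5432/lookups",
        "mode": "timestamp",
        "timestamp.column.name": "updated_at",
        "table.whitelist": "lookup_table",
        "topic.prefix": "lookup-"
      }
    }'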