confluent-platform

Kafka offset management

∥☆過路亽.° Submitted on 2019-12-11 00:15:39
Question: We are using Kafka 0.10... I'm seeing some conflicting information online (and in the documentation) about how offsets are managed in Kafka when enable.auto.commit is TRUE. Does the same poll() method that retrieves messages also handle the commits at the configured intervals? If I retrieve messages from poll() in a single-threaded application and process the messages to completion (including handling errors) in the SAME thread, meaning poll() will not be invoked again until after my processing is…
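
A minimal single-threaded loop makes the behaviour concrete (broker, group, and topic names below are placeholders). In the Java client, when enable.auto.commit is true, the offsets reached by one poll() are committed from inside later poll() calls (and on close()) once auto.commit.interval.ms has elapsed, so a batch is only auto-committed after your processing has returned control to poll():

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AutoCommitLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "5000");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder
            while (true) {
                // Fetches records AND, when the interval is due, commits the
                // positions reached by the previous batch. (0.10 clients use
                // poll(long); 2.0+ clients use poll(Duration).)
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    // Finish processing (including error handling) here,
                    // before the next poll() can commit this batch.
                    System.out.printf("offset=%d value=%s%n",
                            record.offset(), record.value());
                }
            }
        }
    }
}

The consequence of this layout is at-least-once delivery: if the application crashes mid-batch, the uncommitted records are redelivered on restart.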

Kafka Connect - JDBC sink SQL exception

守給你的承諾、 Submitted on 2019-12-02 02:27:57
Question: I am using the Confluent Community edition for a simple setup consisting of a REST client calling the Kafka REST proxy, which then pushes that data into an Oracle database using the provided JDBC sink connector. I noticed that if there is an SQL exception, for instance if the incoming data's length is greater than the defined column length, the task stops; if I restart it, the same thing happens: it tries to insert the erroneous entry and stops again. It does not insert the other entries. Is…
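
Since Kafka Connect 2.0 there are per-connector error-handling settings that tolerate bad records and route them to a dead-letter topic instead of failing the task. A sketch in standalone .properties form, with placeholder names and connection details; note that these settings cover the convert/transform stages, and whether a SQL error raised inside the sink's own insert is skipped depends on the connector version:

# Hypothetical JDBC sink config; every name and URL here is a placeholder.
name=oracle-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=orders
connection.url=jdbc:oracle:thin:@//db-host:1521/ORCL
# Tolerate bad records instead of stopping the task...
errors.tolerance=all
# ...and capture them, with failure-context headers, in a dead-letter topic.
errors.deadletterqueue.topic.name=dlq-orders
errors.deadletterqueue.context.headers.enable=true
errors.log.enable=true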

Kafka Elasticsearch Connector Timestamps

我的梦境 Submitted on 2019-12-02 01:53:52
I can see this has been discussed a few times here, for instance, but I think the solutions are out of date due to breaking changes in Elasticsearch. I'm trying to convert a long/epoch field in the JSON in my Kafka topic to an Elasticsearch date type as it is pushed through the connector. When I try to add a dynamic mapping, my Kafka Connect updates fail because I'm trying to apply two mappings to a field, _doc and kafkaconnect. This was a breaking change around version 6, I believe, where you can only have one mapping per index.

{
  "index_patterns": [ "depart_details" ],
  "mappings": {
    "dynamic…
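
One approach that fits the one-mapping-per-index rule is to stop the connector from writing its own mapping and declare the date field yourself in an index template. A sketch, assuming the epoch field is called event_time (the real field name is truncated above) and a 7.x cluster where mappings carry no type name; on 6.x the properties block would sit under a single type such as _doc:

{
  "index_patterns": [ "depart_details" ],
  "mappings": {
    "properties": {
      "event_time": { "type": "date", "format": "epoch_millis" }
    }
  }
}

PUT this to the legacy _template API before the connector first writes to the index, and set "schema.ignore": "true" in the connector config so it relies on this mapping instead of adding its own.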

Is it possible to use multiple left joins in a Confluent KSQL query? I tried to join a stream with more than one table; if not, what's the solution?

假如想象 Submitted on 2019-12-01 23:42:40
Stream:

ksql> describe ammas;

 Field   | Type
-------------------------------------
 ROWTIME | BIGINT           (system)
 ROWKEY  | VARCHAR(STRING)  (system)
 ID      | INTEGER
-------------------------------------
For runtime statistics and query details run: DESCRIBE EXTENDED <Stream,Table>;

Table-01:

ksql> show tables;

 Table Name | Kafka Topic | Format    | Windowed
-------------------------------------------------
 ANNAT      | anna        | DELIMITED | false
 APPAT      | appa        | DELIMITED | false
-------------------------------------------------

Trying to join the stream against table-01 works as expected: Create stream finalstream as…
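
KSQL of that era accepts only a single join per SELECT, so the usual workaround is to chain the joins through an intermediate stream; newer ksqlDB releases added multi-way joins in one statement. A sketch, where every column other than ID is an assumption (the table schemas aren't shown) and both joins assume the table keys line up with the stream's key:

-- Step 1: join the stream to the first table.
CREATE STREAM ammas_with_anna AS
  SELECT a.ROWKEY AS rk, a.ID AS id, n.SOMECOL AS anna_col
  FROM ammas a
  LEFT JOIN annat n ON a.ROWKEY = n.ROWKEY;

-- Step 2: join the intermediate stream to the second table.
CREATE STREAM finalstream AS
  SELECT w.id, w.anna_col, p.SOMECOL AS appa_col
  FROM ammas_with_anna w
  LEFT JOIN appat p ON w.rk = p.ROWKEY;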