Kafka Connect cluster setup, or launching Connect workers

梦毁少年i 2021-02-04 15:11

I am going through Kafka Connect, and I am trying to understand the concepts.

Let us say I have a Kafka cluster (nodes k1, k2 and k3) set up and running; now I want to run

2 Answers
一向 2021-02-04 15:52

    1) In order to have a highly available kafka-connect service you need to run at least two instances of connect-distributed.sh on two distinct machines that have the same group.id. You can find more details regarding the configuration of each worker in the Kafka Connect documentation. For improved performance, Connect should be run independently of the broker and ZooKeeper machines.
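    A minimal sketch of what that could look like (host names, topic names and file names below are placeholders, not taken from the question): put essentially the same worker properties on both machines, then start connect-distributed.sh on each.

        # connect-worker.properties (same group.id on every worker)
        bootstrap.servers=k1:9092,k2:9092,k3:9092
        group.id=connect-cluster-1
        key.converter=org.apache.kafka.connect.json.JsonConverter
        value.converter=org.apache.kafka.connect.json.JsonConverter
        offset.storage.topic=connect-offsets
        config.storage.topic=connect-configs
        status.storage.topic=connect-status

        # on each of the two Connect machines:
        bin/connect-distributed.sh connect-worker.properties

    Workers that share the same group.id form one Connect cluster and rebalance connectors and tasks among themselves if one worker goes down.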

    2) Yes, you need to place all your connector plugins under plugin.path (normally under /usr/share/java/) on every machine on which you plan to run kafka-connect.
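    For example (the connector directory name below is made up):

        # on every worker machine
        sudo mkdir -p /usr/share/java/my-connector
        sudo cp path/to/my-connector-jars/*.jar /usr/share/java/my-connector/

        # and in each worker's properties file
        plugin.path=/usr/share/java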

    3) kafka-connect will load the connector plugins on startup; you don't need to handle this. Note that if your kafka-connect instance is already running and a new connector plugin is added, you need to restart the service.
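    A sketch of both steps (the systemd service name and the connector config below are hypothetical):

        # after dropping a new plugin jar under plugin.path, restart the worker,
        # e.g. if you run it as a systemd service called kafka-connect:
        sudo systemctl restart kafka-connect

        # connector *instances*, on the other hand, can be created at runtime
        # through the Connect REST API (default port 8083):
        curl -X POST -H "Content-Type: application/json" \
          --data '{"name": "my-file-source", "config": {"connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector", "tasks.max": "1", "file": "/tmp/test.txt", "topic": "test-topic"}}' \
          http://localhost:8083/connectors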

    4) You need to have Java installed on all your machines. For Confluent Platform in particular:

    Java 1.7 and 1.8 are supported in this version of Confluent Platform (Java 1.9 is currently not supported). You should run with the Garbage-First (G1) garbage collector. For more information, see the Supported Versions and Interoperability documentation.
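    For example, to check the JVM and enable G1 on a worker machine (the heap sizes are only an example, adjust to your hardware):

        java -version

        # connect-distributed.sh goes through kafka-run-class.sh, which reads these variables:
        export KAFKA_HEAP_OPTS="-Xms1g -Xmx2g"
        export KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20"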

    5) It depends. Confluent was founded by the original creators of Apache Kafka, and its platform comes as a more complete distribution, adding schema management, connectors and clients. It also comes with KSQL, which is quite useful if you need to act on certain events. Confluent simply adds on top of the Apache Kafka distribution; it's not a modified version.
