Kafka Connect can't find connector

无人及你 · 2021-01-02 19:00

I'm trying to use the Kafka Connect Elasticsearch connector, without success. It is crashing with the following error:

[2018-11-21 14:48:29,096] ERROR S

3 Answers
  • 2021-01-02 19:37

    The compiled JAR needs to be available to Kafka Connect. You have a few options here:

    1. Use Confluent Platform, which includes the Elasticsearch connector (and others) pre-built: https://www.confluent.io/download/. There are zip, rpm/deb, Docker images, etc. available.

    2. Build the JAR yourself. This typically involves:

      cd kafka-connect-elasticsearch-5.0.1
      mvn clean package
      

      Then take the resulting kafka-connect-elasticsearch-5.0.1.jar and put it in a directory configured in Kafka Connect's plugin.path, for example:
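      A minimal sketch (the /opt/connectors directory is an assumption; use whichever directory your worker configuration points at):

      # hypothetical target directory; adjust to your environment
      mkdir -p /opt/connectors/kafka-connect-elasticsearch
      cp target/kafka-connect-elasticsearch-5.0.1.jar /opt/connectors/kafka-connect-elasticsearch/

      and in the Connect worker properties:

      plugin.path=/opt/connectors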

    You can find more info on using Kafka Connect here:

    • https://www.confluent.io/blog/simplest-useful-kafka-connect-data-pipeline-world-thereabouts-part-1/
    • https://www.confluent.io/blog/the-simplest-useful-kafka-connect-data-pipeline-in-the-world-or-thereabouts-part-2/
    • https://www.confluent.io/blog/simplest-useful-kafka-connect-data-pipeline-world-thereabouts-part-3/

    Disclaimer: I work for Confluent, and wrote the above blog posts.

  • 2021-01-02 19:54

    The plugin path must point at JAR files containing compiled code, not the raw Java classes of the source code (src/main/java).

    It also needs to be the parent directory of the directories that contain those plugins.

    plugin.path=/opt/kafka-connect/plugins/
    

    Where

    $ ls -lR /opt/kafka-connect/plugins/
    kafka-connect-elasticsearch-x.y.z/
        file1.jar
        file2.jar 
        etc
    

    Ref - Manually installing Community Connectors

    The Kafka Connect startup scripts in the Confluent Platform also automatically read (or at least used to read) all folders matching share/java/kafka-connect-*, so that's one way to go. At the very least, they will keep doing so if you also include the share/java folder of the Confluent package installation in the plugin path, as sketched below.
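    A minimal sketch (the /opt/confluent install location is an assumption):

    plugin.path=/opt/confluent/share/java,/opt/kafka-connect/plugins/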

    Whether or not you are familiar with Maven, you cannot simply clone the Elasticsearch connector repo and build its master branch; it has prerequisites that must be built first (Kafka, then Confluent's common repo). To avoid that, check out a Git tag such as 5.0.1-post that matches a Confluent release, as shown below.
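    A sketch of that, assuming you want the 5.0.1 release:

    git clone https://github.com/confluentinc/kafka-connect-elasticsearch
    cd kafka-connect-elasticsearch
    git checkout 5.0.1-post
    mvn clean package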

    An even simpler option would be to grab the package using the Confluent Hub CLI.
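    For example (connector coordinates follow Confluent Hub's owner/name:version scheme; pick the version matching your platform):

    confluent-hub install confluentinc/kafka-connect-elasticsearch:5.0.1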

    And if none of that works, downloading the Confluent Platform and using its Kafka Connect scripts would be the easiest option. That does not mean you need to use the Kafka or ZooKeeper configurations from it.

  • 2021-01-02 19:55

    I ran the JDBC connector manually yesterday, on Kafka in Docker without the Confluent Platform etc., just to learn how these things work underneath. I did not have to build the JAR myself or anything like that. Hopefully this will be relevant for you. What I did (skipping the Docker parts, such as how to mount the dir with the connector):

    • download the connector from https://www.confluent.io/connector/kafka-connect-jdbc/ and unpack the zip
    • put the contents of the zip into a directory on the path configured in the properties file (shown below in the 3rd point):

      plugin.path=/plugins
      

      so the tree looks something like this:

      /plugins/
      └── jdbcconnector
          ├── assets
          ├── doc
          ├── etc
          └── lib
      

      Note the lib dir, where the dependencies are; one of them is kafka-connect-jdbc-5.0.0.jar.

    • Now you can try to run the connector:

      ./connect-standalone.sh connect-standalone.properties jdbc-connector-config.properties
      

      connect-standalone.properties contains the common properties needed by Kafka Connect; in my case:

      bootstrap.servers=localhost:9092
      key.converter=org.apache.kafka.connect.json.JsonConverter
      value.converter=org.apache.kafka.connect.json.JsonConverter
      key.converter.schemas.enable=true
      value.converter.schemas.enable=true
      offset.storage.file.filename=/tmp/connect.offsets
      offset.flush.interval.ms=10000
      plugin.path=/plugins
      rest.port=8086
      rest.host.name=127.0.0.1
      

      jdbc-connector-config.properties is more involved, as it is the configuration for this particular connector; you need to dig into the connector's docs. For the JDBC source they are at https://docs.confluent.io/current/connect/kafka-connect-jdbc/source-connector/source_config_options.html. A minimal sketch follows.
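      A minimal sketch of such a file (the connection URL, mode, and topic prefix below are illustrative assumptions; see the docs above for the full option list):

      name=jdbc-source-demo
      connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
      connection.url=jdbc:postgresql://localhost:5432/mydb?user=demo&password=demo
      mode=incrementing
      incrementing.column.name=id
      topic.prefix=jdbc-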
