Kafka to HDFS sink: Missing required configuration “confluent.topic.bootstrap.servers” which has no default value

Submitted by 纵然是瞬间 on 2020-06-23 16:45:35

Question



My HDFS was installed via Ambari (HDP). I'm currently trying to load Kafka topics into an HDFS sink. Kafka and HDFS are installed on the same machine, x.x.x.x. I didn't change much from the default settings, apart from some ports, according to my needs.

Here is how I run Kafka Connect:

/usr/hdp/3.1.4.0-315/kafka/bin/connect-standalone.sh /etc/kafka/connect-standalone.properties /etc/kafka-connect-hdfs/quickstart-hdfs.properties

Inside connect-standalone.properties:

bootstrap.servers=x.x.x.x:6667
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000

Inside quickstart-hdfs.properties:

name=hdfs-sink
#connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
connector.class=io.confluent.connect.hdfs3.Hdfs3SinkConnector
tasks.max=1
topics=test12
hdfs.url=hdfs://x.x.x.x:9000
flush.size=3

Here is the output I get when I execute it:

[2020-06-23 03:26:00,918] INFO Started o.e.j.s.ServletContextHandler@71d9cb05{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:855)
[2020-06-23 03:26:00,928] INFO Started http_8083@329a1243{HTTP/1.1,[http/1.1]}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector:292)
[2020-06-23 03:26:00,928] INFO Started @10495ms (org.eclipse.jetty.server.Server:410)
[2020-06-23 03:26:00,928] INFO Advertised URI: http://x.x.x.x:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:267)
[2020-06-23 03:26:00,928] INFO REST server listening at http://x.x.x.x:8083/, advertising URL http://x.x.x.x:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:217)
[2020-06-23 03:26:00,928] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:55)
[2020-06-23 03:26:00,959] ERROR Failed to create job for quickstart-hdfs.properties (org.apache.kafka.connect.cli.ConnectStandalone:102)
[2020-06-23 03:26:00,960] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:113)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 1 error(s):
Missing required configuration "confluent.topic.bootstrap.servers" which has no default value.
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
        at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
        at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
        at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:110)
Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 1 error(s):
Missing required configuration "confluent.topic.bootstrap.servers" which has no default value.
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
        at org.apache.kafka.connect.runtime.AbstractHerder.maybeAddConfigErrors(AbstractHerder.java:415)
        at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:189)
        at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:107)
[2020-06-23 03:26:00,961] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:65)
[2020-06-23 03:26:00,961] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:223)
[2020-06-23 03:26:00,964] INFO Stopped http_8083@329a1243{HTTP/1.1,[http/1.1]}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector:341)
[2020-06-23 03:26:00,965] INFO node0 Stopped scavenging (org.eclipse.jetty.server.session:167)
[2020-06-23 03:26:00,972] INFO Stopped o.e.j.s.ServletContextHandler@71d9cb05{/,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:1045)
[2020-06-23 03:26:00,974] INFO REST server stopped (org.apache.kafka.connect.runtime.rest.RestServer:241)
[2020-06-23 03:26:00,974] INFO Herder stopping (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:95)
[2020-06-23 03:26:00,974] INFO Worker stopping (org.apache.kafka.connect.runtime.Worker:184)
[2020-06-23 03:26:00,974] INFO Stopped FileOffsetBackingStore (org.apache.kafka.connect.storage.FileOffsetBackingStore:67)
[2020-06-23 03:26:00,975] INFO Worker stopped (org.apache.kafka.connect.runtime.Worker:205)
[2020-06-23 03:26:00,975] INFO Herder stopped (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:112)
[2020-06-23 03:26:00,975] INFO Kafka Connect stopped (org.apache.kafka.connect.runtime.Connect:70)

I'm really new to the Kafka and HDFS environment. Any suggestions and help would be much appreciated. Thank you.

Edit: I've updated my connect-standalone.properties to:

bootstrap.servers=x.x.x.x:6667
confluent.license=
confluent.topic.bootstrap.server=10.64.2.236:6667
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000

Nothing changed; it still shows the same error in the log.

EDIT

quickstart-hdfs.properties

name=hdfs-sink
connector.class=io.confluent.connect.hdfs3.Hdfs3SinkConnector
tasks.max=1
topics=test12
hdfs.url=hdfs://ambari.int.cbn.net.id:8020
flush.size=3
confluent.license=
confluent.topic.bootstrap.servers=x.x.x.x:6667

connect-standalone.properties

bootstrap.servers=x.x.x.x:6667
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
plugin.path=/usr/share/java,/usr/share/confluent-hub-components

It returns a different error:

[2020-06-23 08:34:31,812] INFO Couldn't start Hdfs3SinkConnector: (io.confluent.connect.hdfs3.Hdfs3SinkTask:85)
org.apache.kafka.connect.errors.ConnectException: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/":hdfs:hdfs:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:255)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1857)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1841)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1800)
        at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:59)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3150)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1126)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:707)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)

        at io.confluent.connect.hdfs3.storage.HdfsStorage.create(HdfsStorage.java:78)
        at io.confluent.connect.hdfs3.DataWriter.createDir(DataWriter.java:530)
        at io.confluent.connect.hdfs3.DataWriter.<init>(DataWriter.java:211)
        at io.confluent.connect.hdfs3.DataWriter.<init>(DataWriter.java:91)
        at io.confluent.connect.hdfs3.Hdfs3SinkTask.start(Hdfs3SinkTask.java:76)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:301)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:190)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/":hdfs:hdfs:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:255)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1857)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1841)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1800)
        at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:59)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3150)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1126)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:707)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
        at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2421)
        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2395)
        at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1325)
        at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1322)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1339)
        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1314)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2275)
        at io.confluent.connect.hdfs3.storage.HdfsStorage.create(HdfsStorage.java:76)
        ... 13 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/":hdfs:hdfs:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:255)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1857)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1841)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1800)
        at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:59)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3150)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1126)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:707)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1511)
        at org.apache.hadoop.ipc.Client.call(Client.java:1457)
        at org.apache.hadoop.ipc.Client.call(Client.java:1367)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy47.mkdirs(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:653)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy48.mkdirs(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2419)
        ... 21 more
[2020-06-23 08:34:31,815] INFO Shutting down Hdfs3SinkConnector. (io.confluent.connect.hdfs3.Hdfs3SinkTask:86)
[2020-06-23 08:34:31,815] INFO The connector relies on offsets in HDFS filenames, but does commit these offsets to Connect to enable monitoring progress of the HDFS connector. Upon startup, the HDFS Connector restores offsets from filenames in HDFS. In the absence of files in HDFS, the connector will attempt to find offsets for its consumer group in the '__consumer_offsets' topic. If offsets are not found, the consumer will rely on the reset policy specified in the 'consumer.auto.offset.reset' property to start exporting data to HDFS. (io.confluent.connect.hdfs3.Hdfs3SinkTask:93)
[2020-06-23 08:34:31,815] INFO WorkerSinkTask{id=hdfs-sink-0} Sink task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:302)
[2020-06-23 08:34:31,829] INFO Cluster ID: CQkRgktxRZmGv_C-Q87ViQ (org.apache.kafka.clients.Metadata:273)
[2020-06-23 08:34:31,830] INFO [Consumer clientId=consumer-1, groupId=connect-hdfs-sink] Discovered group coordinator 10.64.2.236:6667 (id: 2147482646 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:677)
[2020-06-23 08:34:31,832] INFO [Consumer clientId=consumer-1, groupId=connect-hdfs-sink] Revoking previously assigned partitions [] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:462)
[2020-06-23 08:34:31,832] INFO [Consumer clientId=consumer-1, groupId=connect-hdfs-sink] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:509)
[2020-06-23 08:34:34,847] INFO [Consumer clientId=consumer-1, groupId=connect-hdfs-sink] Successfully joined group with generation 1 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:473)
[2020-06-23 08:34:34,850] INFO [Consumer clientId=consumer-1, groupId=connect-hdfs-sink] Setting newly assigned partitions [test12-0] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:280)
[2020-06-23 08:34:34,863] INFO [Consumer clientId=consumer-1, groupId=connect-hdfs-sink] Resetting offset for partition test12-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher:583)
[2020-06-23 08:34:34,915] WARN WorkerSinkTask{id=hdfs-sink-0} Offset commit failed during close (org.apache.kafka.connect.runtime.WorkerSinkTask:382)
[2020-06-23 08:34:34,916] ERROR WorkerSinkTask{id=hdfs-sink-0} Commit of offsets threw an unexpected exception for sequence number 1: null (org.apache.kafka.connect.runtime.WorkerSinkTask:260)
java.lang.NullPointerException
        at io.confluent.connect.hdfs3.Hdfs3SinkTask.preCommit(Hdfs3SinkTask.java:122)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.commitOffsets(WorkerSinkTask.java:379)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.closePartitions(WorkerSinkTask.java:616)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:197)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
[2020-06-23 08:34:34,916] ERROR WorkerSinkTask{id=hdfs-sink-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:177)
java.lang.NullPointerException
        at io.confluent.connect.hdfs3.Hdfs3SinkTask.open(Hdfs3SinkTask.java:136)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.openPartitions(WorkerSinkTask.java:612)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.access$1100(WorkerSinkTask.java:69)
        at org.apache.kafka.connect.runtime.WorkerSinkTask$HandleRebalance.onPartitionsAssigned(WorkerSinkTask.java:672)
        at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:283)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:422)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:352)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:337)
        at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:333)
        at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1181)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.pollConsumer(WorkerSinkTask.java:444)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:317)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
[2020-06-23 08:34:34,917] ERROR WorkerSinkTask{id=hdfs-sink-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:178)


Answer 1:


Here's the error:

Missing required configuration "confluent.topic.bootstrap.servers" which has no default value.

The problem is that you've taken the config for the HDFS Sink connector and changed the connector class to a different one (the HDFS 3 Sink), which has different configuration requirements.
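
As the error output suggests, you can also pre-check a connector config against Connect's validate endpoint before submitting it. A quick sketch, assuming the worker's REST API on port 8083 as shown in your log (the config map must include connector.class):

curl -s -X PUT -H "Content-Type: application/json" \
  http://x.x.x.x:8083/connector-plugins/Hdfs3SinkConnector/config/validate \
  -d '{"connector.class": "io.confluent.connect.hdfs3.Hdfs3SinkConnector", "name": "hdfs-sink", "topics": "test12", "hdfs.url": "hdfs://x.x.x.x:8020", "flush.size": "3", "tasks.max": "1"}'

The response lists each missing or invalid property, matching the error list in your log.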

You can follow the quickstart for the HDFS 3 sink connector, or fix your existing configuration by adding

confluent.topic.bootstrap.servers=10.64.2.236:6667
confluent.topic.replication.factor=1
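
For reference, a minimal complete quickstart-hdfs.properties with those two lines added might look like this (a sketch reusing the placeholder hosts and ports from your edits):

name=hdfs-sink
connector.class=io.confluent.connect.hdfs3.Hdfs3SinkConnector
tasks.max=1
topics=test12
hdfs.url=hdfs://x.x.x.x:8020
flush.size=3
# Settings required by Confluent's licensed connectors:
confluent.topic.bootstrap.servers=x.x.x.x:6667
confluent.topic.replication.factor=1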

Note: in your example you missed the s from confluent.topic.bootstrap.servers, which is why it didn't work.
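
The Permission denied error in your second edit is a separate HDFS issue: Connect is running as root, but the HDFS root directory is owned by hdfs and is not world-writable, so the connector cannot create its output directories there (topics.dir and logs.dir default to /topics and /logs). A common fix, assuming the standard HDP hdfs superuser account, is to pre-create those directories and hand them to the user running Connect:

sudo -u hdfs hdfs dfs -mkdir -p /topics /logs
sudo -u hdfs hdfs dfs -chown root /topics /logs

Alternatively, on a non-Kerberized cluster you can export HADOOP_USER_NAME=hdfs in the environment before starting Connect, so the HDFS client identifies itself as the hdfs user.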



Source: https://stackoverflow.com/questions/62526864/kafka-to-hdfs-sink-missing-required-configuration-confluent-topic-bootstrap-ser
