Question
I connect to the server through SSH, launch my Zookeeper, Kafka, and Debezium connector, and after a while only the Kafka terminal tab gets kicked out with the following error:
packet_write_wait: Connection to **.**.***.*** port 22: Broken pipe
and my connector output is:
[2019-07-10 10:04:49,563] WARN [Producer clientId=producer-1] Connection to node 0 (ip-***.**.**.***.eu-west-3.compute.internal/***.**.**.***:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:725)
[2019-07-10 10:04:49,676] ERROR WorkerSourceTask{id=mongodb-source-connector-0} Failed to flush, timed out while waiting for producer to flush outstanding 8 messages (org.apache.kafka.connect.runtime.WorkerSourceTask:420)
[2019-07-10 10:04:49,676] ERROR WorkerSourceTask{id=mongodb-source-connector-0} Failed to commit offsets (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:111)
I don't want to restart everything manually every time that happens. How can I fix this so that I only have to SSH in once, launch the servers and the connector, and then exit?
Answer 1:
Two options:
- Start the processes as a service (ref)
- Use a tool such as screen or tmux so that the session persists even after you close the connection.
Option (1) is how you do it in production. Option (2) is really handy for when you're in development, using VPNs, disconnecting/reconnecting etc — because not only does the process keep running, but you can also reconnect to your session as it was when you disconnected. Here's an example of what it is and how to use it: https://www.rittmanmead.com/blog/2012/05/screen-and-obiee/
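For option (2), here is a minimal sketch of keeping everything alive across SSH disconnects with screen (tmux works the same way); the session name kafka-stack is just a placeholder, and you launch Zookeeper, Kafka, and the connector inside the session exactly as you do today:

# start a named, detachable session on the server
screen -S kafka-stack

# inside the session, launch Zookeeper, Kafka, and the connector as usual

# detach with Ctrl-a d, then close the SSH connection; the processes keep running

# later, SSH back in and reattach to the same session
screen -r kafka-stack

# tmux equivalent: create with "tmux new -s kafka-stack", detach with Ctrl-b d,
# reattach with "tmux attach -t kafka-stack"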
Answer 2:
Alright, so what I did was:
sudo systemctl enable confluent-zookeeper
sudo systemctl enable confluent-kafka
sudo systemctl start confluent-zookeeper
I got a file-access error, chmod'ed the file, and now Zookeeper works fine. Then:
sudo systemctl start confluent-kafka
I got an error that I still couldn't fix; this is the output:
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.j
at java.nio.channels.FileChannel.open(FileChannel.java:287)
at java.nio.channels.FileChannel.open(FileChannel.java:335)
at org.apache.kafka.common.record.FileRecords.openChannel(FileRecords.java:4
at org.apache.kafka.common.record.FileRecords.open(FileRecords.java:410)
at org.apache.kafka.common.record.FileRecords.open(FileRecords.java:419)
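The trace shows the broker hitting an I/O error while opening its log segment files (FileRecords.open), which is typically the data directory not being readable/writable by the user the systemd unit runs as (chmod on a single file is often not enough). A hedged sketch of how one might check and fix that; the paths /etc/kafka/server.properties and /var/lib/kafka and the cp-kafka:confluent owner are assumptions based on Confluent's default packaging, so substitute whatever your installation actually uses:

# see which user the unit runs as and what the full error was
systemctl status confluent-kafka
sudo journalctl -u confluent-kafka --no-pager | tail -n 50

# find the data directory the broker is configured with (assumed config path)
grep log.dirs /etc/kafka/server.properties

# check ownership of that directory (assumed path), then hand it to the service user
ls -ld /var/lib/kafka
sudo chown -R cp-kafka:confluent /var/lib/kafka
sudo systemctl restart confluent-kafka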
Source: https://stackoverflow.com/questions/56968510/how-to-fix-broker-may-not-be-available-after-broken-pipe