I am trying to set up a Kafka cluster (actually, the first node in the cluster). I have a single-node Zookeeper cluster set up, and I am setting up Kafka on a separate node.
When you run bin/kafka-console-consumer.sh, Kafka loads a ConsoleConsumer, which attempts to create a consumer with an auto-generated consumer id. Kafka generates the consumer id by concatenating the name of the local host into it. So the problem was that Java could not resolve the IP address for the local host on the OpenStack VM I am working with.
So the answer was that the OpenStack VM was resolving the local host name to kafka, which is the name of the VM. I had everything set up in the Kafka and Zookeeper instances as kafka1.
So, when Java called getLocalHost, it was trying to find the IP address for kafka, which I did not have in my /etc/hosts file. I simply added an entry for kafka to my /etc/hosts file and everything started working wonderfully!
I would have thought it would resolve to localhost, but it did not; it resolved to the name of the VM, kafka.
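You can check for this failure mode from the shell before touching any Kafka config. This is just a sketch (it assumes getent is available, as it is on typical Linux images); the name it prints will be whatever your VM reports, kafka in my case:

```shell
# Print the name that Java's getLocalHost will try to resolve,
# then ask the local resolver whether that name maps to an address.
hostname
getent hosts "$(hostname)" \
  || echo "hostname does not resolve - add it to /etc/hosts"
```

If the getent line prints nothing and the error message appears, you are hitting the same problem I did.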
As noplay pointed out, the issue was that Kafka wasn't able to resolve the correct IP. This may happen, for example, on EC2 instances running in private subnets without a public IP assigned. The solution, summarized:
hostname
which will show you the host name, something like ip-10-180-128-217. Then just update your /etc/hosts:
sudo nano /etc/hosts
and edit it, e.g.:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ip-10-180-128-217
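If you would rather not open an editor, the same entry can be appended non-interactively. A sketch, assuming you want the name mapped to the loopback address as in the line above (back up /etc/hosts first; $(hostname) substitutes whatever name your instance reports, such as ip-10-180-128-217):

```shell
# Append the instance's reported hostname to a loopback entry in /etc/hosts.
# tee -a is used because the output redirection itself needs root privileges,
# which a plain "sudo echo ... >> /etc/hosts" would not get.
echo "127.0.0.1 $(hostname)" | sudo tee -a /etc/hosts
```

Afterwards, getent hosts "$(hostname)" should print the new mapping, and the Kafka consumer should start without the resolution error.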